00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2438
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3703
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:01.304 The recommended git tool is: git
00:00:01.304 using credential 00000000-0000-0000-0000-000000000002
00:00:01.306 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:01.340 Fetching changes from the remote Git repository
00:00:01.343 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:01.360 Using shallow fetch with depth 1
00:00:01.360 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:01.360 > git --version # timeout=10
00:00:01.377 > git --version # 'git version 2.39.2'
00:00:01.377 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:01.393 Setting http proxy: proxy-dmz.intel.com:911
00:00:01.393 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:10.422 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:10.433 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:10.448 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:10.448 > git config core.sparsecheckout # timeout=10
00:00:10.461 > git read-tree -mu HEAD # timeout=10
00:00:10.479 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:10.504 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:10.504 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:10.642 [Pipeline] Start of Pipeline
00:00:10.656 [Pipeline] library
00:00:10.657 Loading library shm_lib@master
00:00:10.657 Library shm_lib@master is cached. Copying from home.
00:00:10.678 [Pipeline] node
00:00:10.688 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:10.690 [Pipeline] {
00:00:10.698 [Pipeline] catchError
00:00:10.699 [Pipeline] {
00:00:10.708 [Pipeline] wrap
00:00:10.715 [Pipeline] {
00:00:10.721 [Pipeline] stage
00:00:10.722 [Pipeline] { (Prologue)
00:00:10.916 [Pipeline] sh
00:00:11.202 + logger -p user.info -t JENKINS-CI
00:00:11.220 [Pipeline] echo
00:00:11.222 Node: CYP11
00:00:11.228 [Pipeline] sh
00:00:11.525 [Pipeline] setCustomBuildProperty
00:00:11.536 [Pipeline] echo
00:00:11.538 Cleanup processes
00:00:11.542 [Pipeline] sh
00:00:11.824 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.824 1862782 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.837 [Pipeline] sh
00:00:12.118 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:12.118 ++ grep -v 'sudo pgrep'
00:00:12.118 ++ awk '{print $1}'
00:00:12.118 + sudo kill -9
00:00:12.118 + true
00:00:12.134 [Pipeline] cleanWs
00:00:12.144 [WS-CLEANUP] Deleting project workspace...
00:00:12.144 [WS-CLEANUP] Deferred wipeout is used...
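For reference, the "Cleanup processes" step traced above reduces to one pipeline: pgrep -af lists "PID full-command" for every process whose command line mentions the SPDK workspace, the pgrep invocation itself is filtered back out, and the surviving PIDs are force-killed. A minimal sketch reconstructed from that trace (the trailing || true stands in for the "+ true" that keeps the step green when nothing matches):

  #!/usr/bin/env bash
  # Kill stale processes left over from a previous run of this job.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # pgrep -af prints "PID command line" per match; drop the pgrep line itself,
  # keep only the PID column, and SIGKILL whatever remains.
  sudo kill -9 $(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}') || true
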
00:00:12.150 [WS-CLEANUP] done
00:00:12.153 [Pipeline] setCustomBuildProperty
00:00:12.163 [Pipeline] sh
00:00:12.443 + sudo git config --global --replace-all safe.directory '*'
00:00:12.511 [Pipeline] httpRequest
00:00:13.054 [Pipeline] echo
00:00:13.055 Sorcerer 10.211.164.101 is alive
00:00:13.062 [Pipeline] retry
00:00:13.064 [Pipeline] {
00:00:13.073 [Pipeline] httpRequest
00:00:13.077 HttpMethod: GET
00:00:13.077 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.078 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.104 Response Code: HTTP/1.1 200 OK
00:00:13.104 Success: Status code 200 is in the accepted range: 200,404
00:00:13.104 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.227 [Pipeline] }
00:00:25.244 [Pipeline] // retry
00:00:25.252 [Pipeline] sh
00:00:25.535 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:25.553 [Pipeline] httpRequest
00:00:25.905 [Pipeline] echo
00:00:25.906 Sorcerer 10.211.164.101 is alive
00:00:25.915 [Pipeline] retry
00:00:25.917 [Pipeline] {
00:00:25.928 [Pipeline] httpRequest
00:00:25.932 HttpMethod: GET
00:00:25.932 URL: http://10.211.164.101/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:00:25.933 Sending request to url: http://10.211.164.101/packages/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:00:25.947 Response Code: HTTP/1.1 200 OK
00:00:25.947 Success: Status code 200 is in the accepted range: 200,404
00:00:25.948 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:02:56.622 [Pipeline] }
00:02:56.632 [Pipeline] // retry
00:02:56.636 [Pipeline] sh
00:02:56.910 + tar --no-same-owner -xf spdk_a5e6ecf28fd8e9a86690362af173cd2cf51891ee.tar.gz
00:02:59.452 [Pipeline] sh
00:02:59.731 + git -C spdk log --oneline -n5
00:02:59.731 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:02:59.731 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:02:59.731 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:02:59.731 e2dfdf06c accel/mlx5: Register post_poller handler
00:02:59.731 3c8001115 accel/mlx5: More precise condition to update DB
00:02:59.747 [Pipeline] withCredentials
00:02:59.758 > git --version # timeout=10
00:02:59.770 > git --version # 'git version 2.39.2'
00:02:59.787 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:59.789 [Pipeline] {
00:02:59.796 [Pipeline] retry
00:02:59.797 [Pipeline] {
00:02:59.811 [Pipeline] sh
00:03:00.090 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:03:00.101 [Pipeline] }
00:03:00.116 [Pipeline] // retry
00:03:00.121 [Pipeline] }
00:03:00.135 [Pipeline] // withCredentials
00:03:00.144 [Pipeline] httpRequest
00:03:00.586 [Pipeline] echo
00:03:00.588 Sorcerer 10.211.164.101 is alive
00:03:00.598 [Pipeline] retry
00:03:00.600 [Pipeline] {
00:03:00.612 [Pipeline] httpRequest
00:03:00.617 HttpMethod: GET
00:03:00.617 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:00.618 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:00.623 Response Code: HTTP/1.1 200 OK
00:03:00.624 Success: Status code 200 is in the accepted range: 200,404
00:03:00.624 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:29.990 [Pipeline] }
00:03:30.010 [Pipeline] // retry
00:03:30.020 [Pipeline] sh
00:03:30.307 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:03:31.696 [Pipeline] sh
00:03:31.973 + git -C dpdk log --oneline -n5
00:03:31.973 caf0f5d395 version: 22.11.4
00:03:31.973 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:03:31.973 dc9c799c7d vhost: fix missing spinlock unlock
00:03:31.973 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:03:31.973 6ef77f2a5e net/gve: fix RX buffer size alignment
00:03:32.003 [Pipeline] }
00:03:32.023 [Pipeline] // stage
00:03:32.029 [Pipeline] stage
00:03:32.031 [Pipeline] { (Prepare)
00:03:32.044 [Pipeline] writeFile
00:03:32.053 [Pipeline] sh
00:03:32.328 + logger -p user.info -t JENKINS-CI
00:03:32.339 [Pipeline] sh
00:03:32.631 + logger -p user.info -t JENKINS-CI
00:03:32.663 [Pipeline] sh
00:03:32.940 + cat autorun-spdk.conf
00:03:32.940 SPDK_RUN_FUNCTIONAL_TEST=1
00:03:32.940 SPDK_TEST_NVMF=1
00:03:32.940 SPDK_TEST_NVME_CLI=1
00:03:32.940 SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:32.940 SPDK_TEST_NVMF_NICS=e810
00:03:32.940 SPDK_TEST_VFIOUSER=1
00:03:32.940 SPDK_RUN_UBSAN=1
00:03:32.940 NET_TYPE=phy
00:03:32.940 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:32.940 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:32.946 RUN_NIGHTLY=1
00:03:32.950 [Pipeline] readFile
00:03:32.976 [Pipeline] withEnv
00:03:32.978 [Pipeline] {
00:03:32.990 [Pipeline] sh
00:03:33.268 + set -ex
00:03:33.268 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:03:33.268 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:33.268 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:33.268 ++ SPDK_TEST_NVMF=1
00:03:33.268 ++ SPDK_TEST_NVME_CLI=1
00:03:33.268 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:33.268 ++ SPDK_TEST_NVMF_NICS=e810
00:03:33.268 ++ SPDK_TEST_VFIOUSER=1
00:03:33.268 ++ SPDK_RUN_UBSAN=1
00:03:33.268 ++ NET_TYPE=phy
00:03:33.268 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:33.268 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:33.268 ++ RUN_NIGHTLY=1
00:03:33.268 + case $SPDK_TEST_NVMF_NICS in
00:03:33.268 + DRIVERS=ice
00:03:33.268 + [[ tcp == \r\d\m\a ]]
00:03:33.268 + [[ -n ice ]]
00:03:33.268 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:03:33.268 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:03:33.268 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:03:33.268 rmmod: ERROR: Module irdma is not currently loaded
00:03:33.268 rmmod: ERROR: Module i40iw is not currently loaded
00:03:33.268 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:03:33.268 + true
00:03:33.268 + for D in $DRIVERS
00:03:33.268 + sudo modprobe ice
00:03:33.268 + exit 0
00:03:33.276 [Pipeline] }
00:03:33.289 [Pipeline] // withEnv
00:03:33.294 [Pipeline] }
00:03:33.307 [Pipeline] // stage
00:03:33.316 [Pipeline] catchError
00:03:33.318 [Pipeline] {
00:03:33.331 [Pipeline] timeout
00:03:33.331 Timeout set to expire in 1 hr 0 min
00:03:33.332 [Pipeline] {
00:03:33.345 [Pipeline] stage
00:03:33.346 [Pipeline] { (Tests)
00:03:33.359 [Pipeline] sh
00:03:33.638 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:33.638 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:33.638 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:33.638 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:03:33.638 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:33.638 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:33.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:03:33.638 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:33.638 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:03:33.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:03:33.638 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:03:33.638 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:03:33.638 + source /etc/os-release
00:03:33.638 ++ NAME='Fedora Linux'
00:03:33.638 ++ VERSION='39 (Cloud Edition)'
00:03:33.638 ++ ID=fedora
00:03:33.639 ++ VERSION_ID=39
00:03:33.639 ++ VERSION_CODENAME=
00:03:33.639 ++ PLATFORM_ID=platform:f39
00:03:33.639 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:33.639 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:33.639 ++ LOGO=fedora-logo-icon
00:03:33.639 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:33.639 ++ HOME_URL=https://fedoraproject.org/
00:03:33.639 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:33.639 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:33.639 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:33.639 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:33.639 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:33.639 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:33.639 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:33.639 ++ SUPPORT_END=2024-11-12
00:03:33.639 ++ VARIANT='Cloud Edition'
00:03:33.639 ++ VARIANT_ID=cloud
00:03:33.639 + uname -a
00:03:33.639 Linux spdk-cyp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:33.639 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:36.173 Hugepages
00:03:36.173 node hugesize free / total
00:03:36.173 node0 1048576kB 0 / 0
00:03:36.173 node0 2048kB 0 / 0
00:03:36.173 node1 1048576kB 0 / 0
00:03:36.173 node1 2048kB 0 / 0
00:03:36.173
00:03:36.173 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:36.173 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:36.173 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:36.173 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:36.173 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:36.173 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:36.173 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:36.173 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:36.173 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:36.173 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:36.173 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:36.173 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:36.173 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:36.173 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:36.173 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:36.173 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:36.173 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:36.173 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:36.173 + rm -f /tmp/spdk-ld-path
00:03:36.173 + source autorun-spdk.conf
00:03:36.173 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:36.173 ++ SPDK_TEST_NVMF=1
00:03:36.173 ++ SPDK_TEST_NVME_CLI=1
00:03:36.173 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:36.173 ++ SPDK_TEST_NVMF_NICS=e810
00:03:36.173 ++ SPDK_TEST_VFIOUSER=1
00:03:36.173 ++ SPDK_RUN_UBSAN=1
00:03:36.173 ++ NET_TYPE=phy
00:03:36.173 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:36.173 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:36.173 ++ RUN_NIGHTLY=1
00:03:36.173 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:36.173 + [[ -n '' ]]
00:03:36.173 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:36.173 + for M in /var/spdk/build-*-manifest.txt
00:03:36.173 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:36.173 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:36.173 + for M in /var/spdk/build-*-manifest.txt
00:03:36.173 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:36.173 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:36.173 + for M in /var/spdk/build-*-manifest.txt
00:03:36.173 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:36.173 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:03:36.173 ++ uname
00:03:36.173 + [[ Linux == \L\i\n\u\x ]]
00:03:36.173 + sudo dmesg -T
00:03:36.173 + sudo dmesg --clear
00:03:36.173 + dmesg_pid=1865023
00:03:36.173 + [[ Fedora Linux == FreeBSD ]]
00:03:36.173 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:36.173 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:36.173 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:36.173 + [[ -x /usr/src/fio-static/fio ]]
00:03:36.173 + export FIO_BIN=/usr/src/fio-static/fio
00:03:36.173 + FIO_BIN=/usr/src/fio-static/fio
00:03:36.173 + sudo dmesg -Tw
00:03:36.173 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:36.173 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:36.173 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:36.173 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:36.173 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:36.173 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:36.173 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:36.173 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:36.173 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:36.173 16:30:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:36.173 16:30:24 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:36.173 16:30:24 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1
00:03:36.173 16:30:24 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:36.173 16:30:24 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:36.173 16:30:24 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:03:36.173 16:30:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:36.173 16:30:24 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:36.173 16:30:24 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:36.173 16:30:24 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:36.173 16:30:24 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:36.173 16:30:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.174 16:30:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.174 16:30:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.174 16:30:24 -- paths/export.sh@5 -- $ export PATH
00:03:36.174 16:30:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.174 16:30:24 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:36.174 16:30:24 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:36.174 16:30:24 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733499024.XXXXXX
00:03:36.174 16:30:24 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733499024.J3ynP9
00:03:36.174 16:30:24 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:36.174 16:30:24 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:03:36.174 16:30:24 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
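One detail worth pulling out of the Prepare stage traced earlier: the driver setup tolerates rmmod failures (none of the RDMA modules were loaded on this box) but requires modprobe to succeed under set -e. A condensed sketch reconstructed from that xtrace; only the e810 branch this job took is shown, and the empty fallback arm is an assumption for completeness:

  #!/usr/bin/env bash
  set -ex
  # Map the NIC under test to the kernel driver it needs.
  case "$SPDK_TEST_NVMF_NICS" in
    e810) DRIVERS=ice ;;   # Intel E810 NICs use the ice driver
    *)    DRIVERS=    ;;   # assumed fallback; the real script handles more NICs
  esac
  if [[ -n $DRIVERS ]]; then
    # Unload RDMA modules that could claim the NIC; ignore "not loaded" errors.
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
      sudo modprobe "$D"   # must succeed, or the step fails
    done
  fi
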
00:03:36.174 16:30:24 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:03:36.174 16:30:24 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:03:36.174 16:30:24 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:03:36.174 16:30:24 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:36.174 16:30:24 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:36.174 16:30:24 -- common/autotest_common.sh@10 -- $ set +x
00:03:36.174 16:30:24 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:03:36.174 16:30:24 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:36.174 16:30:24 -- pm/common@17 -- $ local monitor
00:03:36.174 16:30:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:36.174 16:30:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:36.174 16:30:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:36.174 16:30:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:36.174 16:30:24 -- pm/common@25 -- $ sleep 1
00:03:36.174 16:30:24 -- pm/common@21 -- $ date +%s
00:03:36.174 16:30:24 -- pm/common@21 -- $ date +%s
00:03:36.174 16:30:24 -- pm/common@21 -- $ date +%s
00:03:36.174 16:30:24 -- pm/common@21 -- $ date +%s
00:03:36.174 16:30:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733499024
00:03:36.174 16:30:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733499024
00:03:36.174 16:30:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733499024
00:03:36.174 16:30:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733499024
00:03:36.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733499024_collect-cpu-load.pm.log
00:03:36.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733499024_collect-vmstat.pm.log
00:03:36.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733499024_collect-cpu-temp.pm.log
00:03:36.174 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733499024_collect-bmc-pm.bmc.pm.log
00:03:37.111 16:30:25 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:37.111 16:30:25 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:37.111 16:30:25 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:37.111 16:30:25 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:37.111 16:30:25 -- spdk/autobuild.sh@16 -- $ date -u
00:03:37.111 Fri Dec 6 03:30:25 PM UTC 2024
00:03:37.111 16:30:25 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:37.111 v25.01-pre-303-ga5e6ecf28
00:03:37.111 16:30:25 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:03:37.111 16:30:25 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:37.111 16:30:25 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:37.111 16:30:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:37.111 16:30:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:37.111 16:30:25 -- common/autotest_common.sh@10 -- $ set +x
00:03:37.111 ************************************
00:03:37.111 START TEST ubsan
00:03:37.111 ************************************
00:03:37.111 16:30:25 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:37.111 using ubsan
00:03:37.111
00:03:37.111 real 0m0.000s
00:03:37.111 user 0m0.000s
00:03:37.111 sys 0m0.000s
00:03:37.111 16:30:25 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:37.111 16:30:25 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:37.111 ************************************
00:03:37.111 END TEST ubsan
00:03:37.111 ************************************
00:03:37.111 16:30:25 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:03:37.111 16:30:25 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:37.111 16:30:25 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:37.111 16:30:25 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:03:37.111 16:30:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:37.111 16:30:25 -- common/autotest_common.sh@10 -- $ set +x
00:03:37.111 ************************************
00:03:37.111 START TEST build_native_dpdk
00:03:37.111 ************************************
00:03:37.111 16:30:25 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
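The compiler probe that closes this block feeds the flag selection traced just below: gcc -dumpversion yields the major version, and each extra DPDK warning flag is gated on a minimum compiler version. A minimal sketch of that pattern, with variable names following the trace (the %%.* trim is an assumption to cover GCCs that print a dotted version):

  #!/usr/bin/env bash
  compiler=gcc
  compiler_version=$("$compiler" -dumpversion)   # e.g. 13 on Fedora 39
  dpdk_cflags='-fPIC -g -fcommon'
  # Newer GCCs get stricter flags; each test mirrors a [[ ... -ge N ]] in the log.
  if [[ $compiler == *gcc* && ${compiler_version%%.*} -ge 5 ]]; then
    dpdk_cflags+=' -Werror'
  fi
  if [[ $compiler == *gcc* && ${compiler_version%%.*} -ge 10 ]]; then
    dpdk_cflags+=' -Wno-stringop-overflow'
  fi
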
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:37.111 16:30:25 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:03:37.112 caf0f5d395 version: 22.11.4
00:03:37.112 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:03:37.112 dc9c799c7d vhost: fix missing spinlock unlock
00:03:37.112 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:03:37.112 6ef77f2a5e net/gve: fix RX buffer size alignment
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:03:37.112 patching file config/rte_config.h
00:03:37.112 Hunk #1 succeeded at 60 (offset 1 line).
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:03:37.112 patching file lib/pcapng/rte_pcapng.c
00:03:37.112 Hunk #1 succeeded at 110 (offset -18 lines).
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:37.112 16:30:25 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:03:37.112 16:30:25 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:40.403 The Meson build system
00:03:40.404 Version: 1.5.0
00:03:40.404 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:03:40.404 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:03:40.404 Build type: native build
00:03:40.404 Program cat found: YES (/usr/bin/cat)
00:03:40.404 Project name: DPDK
00:03:40.404 Project version: 22.11.4
00:03:40.404 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:40.404 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:40.404 Host machine cpu family: x86_64
00:03:40.404 Host machine cpu: x86_64
00:03:40.404 Message: ## Building in Developer Mode ##
00:03:40.404 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:40.404 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:03:40.404 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:03:40.404 Program objdump found: YES (/usr/bin/objdump)
00:03:40.404 Program python3 found: YES (/usr/bin/python3)
00:03:40.404 Program cat found: YES (/usr/bin/cat)
00:03:40.404 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
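The long cmp_versions trace above (run three times: against 21.11.0, then twice against 24.07.0 for the lt and ge checks) is a plain component-wise version compare: split both versions on ".-:", compare numerically position by position, and short-circuit at the first difference. A self-contained sketch of the same algorithm; this is a simplified re-implementation for illustration, not SPDK's exact scripts/common.sh function, and only the '<' and '>=' operators exercised in the log are handled:

  #!/usr/bin/env bash
  # Usage: cmp_versions 22.11.4 '<' 24.07.0   (exit 0 = comparison holds)
  cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      # Missing components compare as 0; 10# forces base 10 so "07" is not octal.
      local a=$((10#${ver1[v]:-0})) b=$((10#${ver2[v]:-0}))
      ((a > b)) && { [[ $op == '<' ]] && return 1 || return 0; }
      ((a < b)) && { [[ $op == '<' ]] && return 0 || return 1; }
    done
    # All components equal: '<' is false, '>=' is true.
    [[ $op == '>=' ]]
  }
  cmp_versions 22.11.4 '<' 21.11.0 || echo "22.11.4 is not < 21.11.0"
  cmp_versions 22.11.4 '<' 24.07.0 && echo "22.11.4 is < 24.07.0"
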
00:03:40.404 Checking for size of "void *" : 8
00:03:40.404 Checking for size of "void *" : 8 (cached)
00:03:40.404 Library m found: YES
00:03:40.404 Library numa found: YES
00:03:40.404 Has header "numaif.h" : YES
00:03:40.404 Library fdt found: NO
00:03:40.404 Library execinfo found: NO
00:03:40.404 Has header "execinfo.h" : YES
00:03:40.404 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:40.404 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:40.404 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:40.404 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:40.404 Run-time dependency openssl found: YES 3.1.1
00:03:40.404 Run-time dependency libpcap found: YES 1.10.4
00:03:40.404 Has header "pcap.h" with dependency libpcap: YES
00:03:40.404 Compiler for C supports arguments -Wcast-qual: YES
00:03:40.404 Compiler for C supports arguments -Wdeprecated: YES
00:03:40.404 Compiler for C supports arguments -Wformat: YES
00:03:40.404 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:40.404 Compiler for C supports arguments -Wformat-security: NO
00:03:40.404 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:40.404 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:40.404 Compiler for C supports arguments -Wnested-externs: YES
00:03:40.404 Compiler for C supports arguments -Wold-style-definition: YES
00:03:40.404 Compiler for C supports arguments -Wpointer-arith: YES
00:03:40.404 Compiler for C supports arguments -Wsign-compare: YES
00:03:40.404 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:40.404 Compiler for C supports arguments -Wundef: YES
00:03:40.404 Compiler for C supports arguments -Wwrite-strings: YES
00:03:40.404 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:40.404 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:40.404 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:40.404 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:40.404 Compiler for C supports arguments -mavx512f: YES
00:03:40.404 Checking if "AVX512 checking" compiles: YES
00:03:40.404 Fetching value of define "__SSE4_2__" : 1
00:03:40.404 Fetching value of define "__AES__" : 1
00:03:40.404 Fetching value of define "__AVX__" : 1
00:03:40.404 Fetching value of define "__AVX2__" : 1
00:03:40.404 Fetching value of define "__AVX512BW__" : 1
00:03:40.404 Fetching value of define "__AVX512CD__" : 1
00:03:40.404 Fetching value of define "__AVX512DQ__" : 1
00:03:40.404 Fetching value of define "__AVX512F__" : 1
00:03:40.404 Fetching value of define "__AVX512VL__" : 1
00:03:40.404 Fetching value of define "__PCLMUL__" : 1
00:03:40.404 Fetching value of define "__RDRND__" : 1
00:03:40.404 Fetching value of define "__RDSEED__" : 1
00:03:40.404 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:40.404 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:40.404 Message: lib/kvargs: Defining dependency "kvargs"
00:03:40.404 Message: lib/telemetry: Defining dependency "telemetry"
00:03:40.404 Checking for function "getentropy" : YES
00:03:40.404 Message: lib/eal: Defining dependency "eal"
00:03:40.404 Message: lib/ring: Defining dependency "ring"
00:03:40.404 Message: lib/rcu: Defining dependency "rcu"
00:03:40.404 Message: lib/mempool: Defining dependency "mempool"
00:03:40.404 Message: lib/mbuf: Defining dependency "mbuf"
00:03:40.404 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:40.404 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:40.404 Compiler for C supports arguments -mpclmul: YES
00:03:40.404 Compiler for C supports arguments -maes: YES
00:03:40.404 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:40.404 Compiler for C supports arguments -mavx512bw: YES
00:03:40.404 Compiler for C supports arguments -mavx512dq: YES
00:03:40.404 Compiler for C supports arguments -mavx512vl: YES
00:03:40.404 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:40.404 Compiler for C supports arguments -mavx2: YES
00:03:40.404 Compiler for C supports arguments -mavx: YES
00:03:40.404 Message: lib/net: Defining dependency "net"
00:03:40.404 Message: lib/meter: Defining dependency "meter"
00:03:40.404 Message: lib/ethdev: Defining dependency "ethdev"
00:03:40.404 Message: lib/pci: Defining dependency "pci"
00:03:40.404 Message: lib/cmdline: Defining dependency "cmdline"
00:03:40.404 Message: lib/metrics: Defining dependency "metrics"
00:03:40.404 Message: lib/hash: Defining dependency "hash"
00:03:40.404 Message: lib/timer: Defining dependency "timer"
00:03:40.404 Fetching value of define "__AVX2__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512CD__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:40.404 Message: lib/acl: Defining dependency "acl"
00:03:40.404 Message: lib/bbdev: Defining dependency "bbdev"
00:03:40.404 Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:40.404 Run-time dependency libelf found: YES 0.191
00:03:40.404 Message: lib/bpf: Defining dependency "bpf"
00:03:40.404 Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:40.404 Message: lib/compressdev: Defining dependency "compressdev"
00:03:40.404 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:40.404 Message: lib/distributor: Defining dependency "distributor"
00:03:40.404 Message: lib/efd: Defining dependency "efd"
00:03:40.404 Message: lib/eventdev: Defining dependency "eventdev"
00:03:40.404 Message: lib/gpudev: Defining dependency "gpudev"
00:03:40.404 Message: lib/gro: Defining dependency "gro"
00:03:40.404 Message: lib/gso: Defining dependency "gso"
00:03:40.404 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:40.404 Message: lib/jobstats: Defining dependency "jobstats"
00:03:40.404 Message: lib/latencystats: Defining dependency "latencystats"
00:03:40.404 Message: lib/lpm: Defining dependency "lpm"
00:03:40.404 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512IFMA__" : 1
00:03:40.404 Message: lib/member: Defining dependency "member"
00:03:40.404 Message: lib/pcapng: Defining dependency "pcapng"
00:03:40.404 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:40.404 Message: lib/power: Defining dependency "power"
00:03:40.404 Message: lib/rawdev: Defining dependency "rawdev"
00:03:40.404 Message: lib/regexdev: Defining dependency "regexdev"
00:03:40.404 Message: lib/dmadev: Defining dependency "dmadev"
00:03:40.404 Message: lib/rib: Defining dependency "rib"
00:03:40.404 Message: lib/reorder: Defining dependency "reorder"
00:03:40.404 Message: lib/sched: Defining dependency "sched"
00:03:40.404 Message: lib/security: Defining dependency "security"
00:03:40.404 Message: lib/stack: Defining dependency "stack"
00:03:40.404 Has header "linux/userfaultfd.h" : YES
00:03:40.404 Message: lib/vhost: Defining dependency "vhost"
00:03:40.404 Message: lib/ipsec: Defining dependency "ipsec"
00:03:40.404 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:40.404 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:40.404 Message: lib/fib: Defining dependency "fib"
00:03:40.404 Message: lib/port: Defining dependency "port"
00:03:40.404 Message: lib/pdump: Defining dependency "pdump"
00:03:40.404 Message: lib/table: Defining dependency "table"
00:03:40.405 Message: lib/pipeline: Defining dependency "pipeline"
00:03:40.405 Message: lib/graph: Defining dependency "graph"
00:03:40.405 Message: lib/node: Defining dependency "node"
00:03:40.405 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:40.405 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:40.405 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:40.405 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:40.405 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:40.405 Compiler for C supports arguments -Wno-unused-value: YES
00:03:40.405 Compiler for C supports arguments -Wno-format: YES
00:03:40.405 Compiler for C supports arguments -Wno-format-security: YES
00:03:40.405 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:40.405 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:41.785 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:41.785 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:41.785 Fetching value of define "__AVX2__" : 1 (cached)
00:03:41.785 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:41.785 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:41.785 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:41.785 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:41.785 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:41.785 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:41.785 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:41.785 Configuring doxy-api.conf using configuration
00:03:41.785 Program sphinx-build found: NO
00:03:41.785 Configuring rte_build_config.h using configuration
00:03:41.785 Message:
00:03:41.785 =================
00:03:41.785 Applications Enabled
00:03:41.785 =================
00:03:41.785
00:03:41.785 apps:
00:03:41.785 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:03:41.785 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:03:41.785 test-security-perf,
00:03:41.785
00:03:41.785 Message:
00:03:41.785 =================
00:03:41.785 Libraries Enabled
00:03:41.785 =================
00:03:41.785
00:03:41.785 libs:
00:03:41.785 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:03:41.785 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:03:41.785 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:03:41.785 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:03:41.785 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:03:41.785 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:03:41.785 table, pipeline, graph, node,
00:03:41.785
00:03:41.785 Message:
00:03:41.785 ===============
00:03:41.785 Drivers Enabled
00:03:41.785 ===============
00:03:41.785
00:03:41.785 common:
00:03:41.785
00:03:41.785 bus:
00:03:41.785 pci, vdev,
00:03:41.785 mempool:
00:03:41.785 ring,
00:03:41.785 dma:
00:03:41.785
00:03:41.785 net:
00:03:41.785 i40e,
00:03:41.785 raw:
00:03:41.785
00:03:41.785 crypto:
00:03:41.785
00:03:41.785 compress:
00:03:41.785
00:03:41.785 regex:
00:03:41.785
00:03:41.785 vdpa:
00:03:41.785
00:03:41.785 event:
00:03:41.785
00:03:41.785 baseband:
00:03:41.785
00:03:41.785 gpu:
00:03:41.785
00:03:41.785
00:03:41.785 Message:
00:03:41.785 =================
00:03:41.785 Content Skipped
00:03:41.785 =================
00:03:41.785
00:03:41.785 apps:
00:03:41.785
00:03:41.785 libs:
00:03:41.785 kni: explicitly disabled via build config (deprecated lib)
00:03:41.785 flow_classify: explicitly disabled via build config (deprecated lib)
00:03:41.785
00:03:41.785 drivers:
00:03:41.785 common/cpt: not in enabled drivers build config
00:03:41.785 common/dpaax: not in enabled drivers build config
00:03:41.785 common/iavf: not in enabled drivers build config
00:03:41.785 common/idpf: not in enabled drivers build config
00:03:41.785 common/mvep: not in enabled drivers build config
00:03:41.785 common/octeontx: not in enabled drivers build config
00:03:41.785 bus/auxiliary: not in enabled drivers build config
00:03:41.785 bus/dpaa: not in enabled drivers build config
00:03:41.785 bus/fslmc: not in enabled drivers build config
00:03:41.785 bus/ifpga: not in enabled drivers build config
00:03:41.785 bus/vmbus: not in enabled drivers build config
00:03:41.785 common/cnxk: not in enabled drivers build config
00:03:41.785 common/mlx5: not in enabled drivers build config
00:03:41.785 common/qat: not in enabled drivers build config
00:03:41.785 common/sfc_efx: not in enabled drivers build config
00:03:41.785 mempool/bucket: not in enabled drivers build config
00:03:41.785 mempool/cnxk: not in enabled drivers build config
00:03:41.785 mempool/dpaa: not in enabled drivers build config
00:03:41.785 mempool/dpaa2: not in enabled drivers build config
00:03:41.785 mempool/octeontx: not in enabled drivers build config
00:03:41.785 mempool/stack: not in enabled drivers build config
00:03:41.785 dma/cnxk: not in enabled drivers build config
00:03:41.785 dma/dpaa: not in enabled drivers build config
00:03:41.785 dma/dpaa2: not in enabled drivers build config
00:03:41.785 dma/hisilicon: not in enabled drivers build config
00:03:41.785 dma/idxd: not in enabled drivers build config
00:03:41.785 dma/ioat: not in enabled drivers build config
00:03:41.785 dma/skeleton: not in enabled drivers build config
00:03:41.785 net/af_packet: not in enabled drivers build config
00:03:41.785 net/af_xdp: not in enabled drivers build config
00:03:41.785 net/ark: not in enabled drivers build config
00:03:41.785 net/atlantic: not in enabled drivers build config
00:03:41.785 net/avp: not in enabled drivers build config
00:03:41.785 net/axgbe: not in enabled drivers build config
00:03:41.785 net/bnx2x: not in enabled drivers build config
00:03:41.785 net/bnxt: not in enabled drivers build config
00:03:41.785 net/bonding: not in enabled drivers build config
00:03:41.785 net/cnxk: not in enabled drivers build config
00:03:41.785 net/cxgbe: not in enabled drivers build config
00:03:41.785 net/dpaa: not in enabled drivers build config
00:03:41.785 net/dpaa2: not in enabled drivers build config 00:03:41.785 net/e1000: not in enabled drivers build config 00:03:41.785 net/ena: not in enabled drivers build config 00:03:41.785 net/enetc: not in enabled drivers build config 00:03:41.785 net/enetfec: not in enabled drivers build config 00:03:41.785 net/enic: not in enabled drivers build config 00:03:41.785 net/failsafe: not in enabled drivers build config 00:03:41.785 net/fm10k: not in enabled drivers build config 00:03:41.785 net/gve: not in enabled drivers build config 00:03:41.785 net/hinic: not in enabled drivers build config 00:03:41.785 net/hns3: not in enabled drivers build config 00:03:41.785 net/iavf: not in enabled drivers build config 00:03:41.785 net/ice: not in enabled drivers build config 00:03:41.785 net/idpf: not in enabled drivers build config 00:03:41.785 net/igc: not in enabled drivers build config 00:03:41.785 net/ionic: not in enabled drivers build config 00:03:41.785 net/ipn3ke: not in enabled drivers build config 00:03:41.785 net/ixgbe: not in enabled drivers build config 00:03:41.785 net/kni: not in enabled drivers build config 00:03:41.785 net/liquidio: not in enabled drivers build config 00:03:41.785 net/mana: not in enabled drivers build config 00:03:41.785 net/memif: not in enabled drivers build config 00:03:41.785 net/mlx4: not in enabled drivers build config 00:03:41.785 net/mlx5: not in enabled drivers build config 00:03:41.785 net/mvneta: not in enabled drivers build config 00:03:41.785 net/mvpp2: not in enabled drivers build config 00:03:41.785 net/netvsc: not in enabled drivers build config 00:03:41.785 net/nfb: not in enabled drivers build config 00:03:41.785 net/nfp: not in enabled drivers build config 00:03:41.785 net/ngbe: not in enabled drivers build config 00:03:41.785 net/null: not in enabled drivers build config 00:03:41.785 net/octeontx: not in enabled drivers build config 00:03:41.785 net/octeon_ep: not in enabled drivers build config 00:03:41.785 net/pcap: not in enabled drivers build config 00:03:41.785 net/pfe: not in enabled drivers build config 00:03:41.785 net/qede: not in enabled drivers build config 00:03:41.785 net/ring: not in enabled drivers build config 00:03:41.785 net/sfc: not in enabled drivers build config 00:03:41.785 net/softnic: not in enabled drivers build config 00:03:41.785 net/tap: not in enabled drivers build config 00:03:41.785 net/thunderx: not in enabled drivers build config 00:03:41.785 net/txgbe: not in enabled drivers build config 00:03:41.785 net/vdev_netvsc: not in enabled drivers build config 00:03:41.785 net/vhost: not in enabled drivers build config 00:03:41.785 net/virtio: not in enabled drivers build config 00:03:41.785 net/vmxnet3: not in enabled drivers build config 00:03:41.785 raw/cnxk_bphy: not in enabled drivers build config 00:03:41.785 raw/cnxk_gpio: not in enabled drivers build config 00:03:41.785 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:41.785 raw/ifpga: not in enabled drivers build config 00:03:41.785 raw/ntb: not in enabled drivers build config 00:03:41.785 raw/skeleton: not in enabled drivers build config 00:03:41.785 crypto/armv8: not in enabled drivers build config 00:03:41.785 crypto/bcmfs: not in enabled drivers build config 00:03:41.786 crypto/caam_jr: not in enabled drivers build config 00:03:41.786 crypto/ccp: not in enabled drivers build config 00:03:41.786 crypto/cnxk: not in enabled drivers build config 00:03:41.786 crypto/dpaa_sec: not in enabled drivers build config 00:03:41.786 crypto/dpaa2_sec: not in 
enabled drivers build config 00:03:41.786 crypto/ipsec_mb: not in enabled drivers build config 00:03:41.786 crypto/mlx5: not in enabled drivers build config 00:03:41.786 crypto/mvsam: not in enabled drivers build config 00:03:41.786 crypto/nitrox: not in enabled drivers build config 00:03:41.786 crypto/null: not in enabled drivers build config 00:03:41.786 crypto/octeontx: not in enabled drivers build config 00:03:41.786 crypto/openssl: not in enabled drivers build config 00:03:41.786 crypto/scheduler: not in enabled drivers build config 00:03:41.786 crypto/uadk: not in enabled drivers build config 00:03:41.786 crypto/virtio: not in enabled drivers build config 00:03:41.786 compress/isal: not in enabled drivers build config 00:03:41.786 compress/mlx5: not in enabled drivers build config 00:03:41.786 compress/octeontx: not in enabled drivers build config 00:03:41.786 compress/zlib: not in enabled drivers build config 00:03:41.786 regex/mlx5: not in enabled drivers build config 00:03:41.786 regex/cn9k: not in enabled drivers build config 00:03:41.786 vdpa/ifc: not in enabled drivers build config 00:03:41.786 vdpa/mlx5: not in enabled drivers build config 00:03:41.786 vdpa/sfc: not in enabled drivers build config 00:03:41.786 event/cnxk: not in enabled drivers build config 00:03:41.786 event/dlb2: not in enabled drivers build config 00:03:41.786 event/dpaa: not in enabled drivers build config 00:03:41.786 event/dpaa2: not in enabled drivers build config 00:03:41.786 event/dsw: not in enabled drivers build config 00:03:41.786 event/opdl: not in enabled drivers build config 00:03:41.786 event/skeleton: not in enabled drivers build config 00:03:41.786 event/sw: not in enabled drivers build config 00:03:41.786 event/octeontx: not in enabled drivers build config 00:03:41.786 baseband/acc: not in enabled drivers build config 00:03:41.786 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:41.786 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:41.786 baseband/la12xx: not in enabled drivers build config 00:03:41.786 baseband/null: not in enabled drivers build config 00:03:41.786 baseband/turbo_sw: not in enabled drivers build config 00:03:41.786 gpu/cuda: not in enabled drivers build config 00:03:41.786 00:03:41.786 00:03:41.786 Build targets in project: 309 00:03:41.786 00:03:41.786 DPDK 22.11.4 00:03:41.786 00:03:41.786 User defined options 00:03:41.786 libdir : lib 00:03:41.786 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:03:41.786 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:41.786 c_link_args : 00:03:41.786 enable_docs : false 00:03:41.786 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:41.786 enable_kmods : false 00:03:41.786 machine : native 00:03:41.786 tests : false 00:03:41.786 00:03:41.786 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:41.786 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
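[Editor's note] The block above is meson's configuration report; the exact command issued by autobuild_common.sh is not captured in this excerpt, and the WARNING indicates it used the deprecated `meson [options]` spelling rather than `meson setup [options]`. Below is a hedged reconstruction of an equivalent, non-deprecated invocation: every value is copied from the "User defined options" summary, the build directory name build-tmp is taken from the ninja step that follows, and the option names (enable_docs, enable_drivers, enable_kmods, machine, tests) are the DPDK 22.11 meson options echoed above.

    # Sketch only -- the real autobuild_common.sh command line is not shown in this log.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm

Using the `meson setup` subcommand produces the same summary while avoiding the deprecation warning; ninja then builds out of build-tmp exactly as in the next step.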
00:03:42.057 16:30:30 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 00:03:42.057 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:42.057 [1/738] Generating lib/rte_kvargs_def with a custom command 00:03:42.057 [2/738] Generating lib/rte_kvargs_mingw with a custom command 00:03:42.057 [3/738] Generating lib/rte_telemetry_def with a custom command 00:03:42.057 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:03:42.057 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:42.057 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:42.057 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:42.057 [8/738] Generating lib/rte_ring_def with a custom command 00:03:42.057 [9/738] Generating lib/rte_rcu_def with a custom command 00:03:42.057 [10/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:42.057 [11/738] Generating lib/rte_ring_mingw with a custom command 00:03:42.057 [12/738] Generating lib/rte_mempool_mingw with a custom command 00:03:42.057 [13/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:42.318 [14/738] Generating lib/rte_eal_mingw with a custom command 00:03:42.318 [15/738] Generating lib/rte_mempool_def with a custom command 00:03:42.318 [16/738] Generating lib/rte_eal_def with a custom command 00:03:42.318 [17/738] Generating lib/rte_rcu_mingw with a custom command 00:03:42.318 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:42.318 [19/738] Generating lib/rte_mbuf_def with a custom command 00:03:42.318 [20/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:42.318 [21/738] Generating lib/rte_ethdev_def with a custom command 00:03:42.318 [22/738] Generating lib/rte_net_mingw with a custom command 00:03:42.318 [23/738] Generating lib/rte_mbuf_mingw with a custom command 00:03:42.318 [24/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:42.318 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:42.318 [26/738] Generating lib/rte_meter_def with a custom command 00:03:42.318 [27/738] Generating lib/rte_ethdev_mingw with a custom command 00:03:42.318 [28/738] Generating lib/rte_meter_mingw with a custom command 00:03:42.318 [29/738] Generating lib/rte_net_def with a custom command 00:03:42.318 [30/738] Generating lib/rte_pci_mingw with a custom command 00:03:42.318 [31/738] Generating lib/rte_pci_def with a custom command 00:03:42.318 [32/738] Generating lib/rte_timer_def with a custom command 00:03:42.318 [33/738] Generating lib/rte_cmdline_mingw with a custom command 00:03:42.318 [34/738] Generating lib/rte_metrics_mingw with a custom command 00:03:42.318 [35/738] Generating lib/rte_hash_def with a custom command 00:03:42.318 [36/738] Generating lib/rte_cmdline_def with a custom command 00:03:42.318 [37/738] Generating lib/rte_metrics_def with a custom command 00:03:42.318 [38/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:42.318 [39/738] Linking static target lib/librte_kvargs.a 00:03:42.318 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:42.318 [41/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:42.318 [42/738] Generating lib/rte_hash_mingw with a custom command 00:03:42.318 
[43/738] Generating lib/rte_timer_mingw with a custom command 00:03:42.318 [44/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:42.318 [45/738] Generating lib/rte_acl_def with a custom command 00:03:42.318 [46/738] Generating lib/rte_bbdev_def with a custom command 00:03:42.318 [47/738] Generating lib/rte_bbdev_mingw with a custom command 00:03:42.318 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:42.318 [49/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:42.318 [50/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:42.318 [51/738] Generating lib/rte_acl_mingw with a custom command 00:03:42.318 [52/738] Generating lib/rte_bitratestats_mingw with a custom command 00:03:42.318 [53/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:42.318 [54/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:42.318 [55/738] Generating lib/rte_bitratestats_def with a custom command 00:03:42.318 [56/738] Generating lib/rte_cfgfile_mingw with a custom command 00:03:42.318 [57/738] Generating lib/rte_bpf_def with a custom command 00:03:42.318 [58/738] Generating lib/rte_bpf_mingw with a custom command 00:03:42.318 [59/738] Generating lib/rte_cfgfile_def with a custom command 00:03:42.318 [60/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:42.318 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:42.318 [62/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:42.318 [63/738] Generating lib/rte_compressdev_def with a custom command 00:03:42.318 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:42.318 [65/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:42.318 [66/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:42.318 [67/738] Generating lib/rte_cryptodev_def with a custom command 00:03:42.319 [68/738] Generating lib/rte_cryptodev_mingw with a custom command 00:03:42.319 [69/738] Generating lib/rte_compressdev_mingw with a custom command 00:03:42.319 [70/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:42.319 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:42.319 [72/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:42.319 [73/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:42.319 [74/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:42.319 [75/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:42.319 [76/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:42.319 [77/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:42.319 [78/738] Generating lib/rte_distributor_def with a custom command 00:03:42.319 [79/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:42.319 [80/738] Generating lib/rte_distributor_mingw with a custom command 00:03:42.319 [81/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:42.319 [82/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:42.319 [83/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:42.319 [84/738] Generating lib/rte_efd_def with a custom command 00:03:42.319 [85/738] Generating 
lib/rte_efd_mingw with a custom command 00:03:42.319 [86/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:42.319 [87/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:42.319 [88/738] Generating lib/rte_eventdev_def with a custom command 00:03:42.319 [89/738] Generating lib/rte_gpudev_def with a custom command 00:03:42.319 [90/738] Generating lib/rte_eventdev_mingw with a custom command 00:03:42.319 [91/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:42.319 [92/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:42.319 [93/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:42.319 [94/738] Generating lib/rte_gpudev_mingw with a custom command 00:03:42.319 [95/738] Linking static target lib/librte_pci.a 00:03:42.319 [96/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:42.319 [97/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:42.319 [98/738] Generating lib/rte_gro_mingw with a custom command 00:03:42.578 [99/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:42.578 [100/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:42.578 [101/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:42.578 [102/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:42.578 [103/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:42.578 [104/738] Generating lib/rte_gro_def with a custom command 00:03:42.578 [105/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:42.578 [106/738] Generating lib/rte_gso_def with a custom command 00:03:42.578 [107/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:42.578 [108/738] Generating lib/rte_gso_mingw with a custom command 00:03:42.578 [109/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:42.578 [110/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:42.578 [111/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:42.578 [112/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:42.578 [113/738] Generating lib/rte_ip_frag_mingw with a custom command 00:03:42.578 [114/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:42.578 [115/738] Generating lib/rte_ip_frag_def with a custom command 00:03:42.578 [116/738] Generating lib/rte_jobstats_mingw with a custom command 00:03:42.578 [117/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:42.578 [118/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:42.578 [119/738] Generating lib/rte_jobstats_def with a custom command 00:03:42.578 [120/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:42.578 [121/738] Generating lib/rte_lpm_def with a custom command 00:03:42.578 [122/738] Generating lib/rte_latencystats_def with a custom command 00:03:42.578 [123/738] Generating lib/rte_latencystats_mingw with a custom command 00:03:42.578 [124/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:42.578 [125/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:42.578 [126/738] Generating lib/rte_lpm_mingw with a custom command 00:03:42.578 [127/738] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:42.578 [128/738] Linking static target lib/librte_ring.a 00:03:42.578 [129/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:42.578 [130/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:42.578 [131/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:42.578 [132/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:42.578 [133/738] Generating lib/rte_member_mingw with a custom command 00:03:42.578 [134/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:42.578 [135/738] Generating lib/rte_member_def with a custom command 00:03:42.578 [136/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:42.578 [137/738] Generating lib/rte_pcapng_def with a custom command 00:03:42.578 [138/738] Linking static target lib/librte_meter.a 00:03:42.578 [139/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:42.578 [140/738] Generating lib/rte_pcapng_mingw with a custom command 00:03:42.578 [141/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:42.578 [142/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:42.578 [143/738] Generating lib/rte_rawdev_def with a custom command 00:03:42.578 [144/738] Generating lib/rte_power_mingw with a custom command 00:03:42.578 [145/738] Generating lib/rte_power_def with a custom command 00:03:42.578 [146/738] Generating lib/rte_rawdev_mingw with a custom command 00:03:42.578 [147/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:42.578 [148/738] Generating lib/rte_dmadev_def with a custom command 00:03:42.578 [149/738] Generating lib/rte_regexdev_def with a custom command 00:03:42.578 [150/738] Generating lib/rte_regexdev_mingw with a custom command 00:03:42.578 [151/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:42.578 [152/738] Generating lib/rte_dmadev_mingw with a custom command 00:03:42.578 [153/738] Generating lib/rte_rib_def with a custom command 00:03:42.578 [154/738] Generating lib/rte_rib_mingw with a custom command 00:03:42.578 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:42.844 [156/738] Generating lib/rte_reorder_mingw with a custom command 00:03:42.844 [157/738] Generating lib/rte_reorder_def with a custom command 00:03:42.844 [158/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:42.844 [159/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:42.844 [160/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:42.844 [161/738] Linking static target lib/librte_jobstats.a 00:03:42.844 [162/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:42.844 [163/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:42.844 [164/738] Generating lib/rte_sched_def with a custom command 00:03:42.844 [165/738] Linking static target lib/librte_cfgfile.a 00:03:42.844 [166/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:42.844 [167/738] Generating lib/rte_sched_mingw with a custom command 00:03:42.844 [168/738] Generating lib/rte_security_def with a custom command 00:03:42.844 [169/738] Generating lib/rte_security_mingw with a custom command 00:03:42.844 [170/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:42.844 [171/738] 
Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.844 [172/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:42.844 [173/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.844 [174/738] Generating lib/rte_stack_def with a custom command 00:03:42.844 [175/738] Generating lib/rte_stack_mingw with a custom command 00:03:42.844 [176/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:42.844 [177/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:42.844 [178/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:42.844 [179/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:42.844 [180/738] Linking target lib/librte_kvargs.so.23.0 00:03:42.844 [181/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:42.844 [182/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:42.844 [183/738] Generating lib/rte_vhost_def with a custom command 00:03:42.844 [184/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:42.844 [185/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:42.844 [186/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:42.844 [187/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:42.844 [188/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:42.844 [189/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:42.844 [190/738] Generating lib/rte_vhost_mingw with a custom command 00:03:42.844 [191/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:42.844 [192/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:42.844 [193/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:42.844 [194/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:42.844 [195/738] Generating lib/rte_ipsec_def with a custom command 00:03:42.844 [196/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:42.844 [197/738] Linking static target lib/librte_stack.a 00:03:42.844 [198/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:42.844 [199/738] Generating lib/rte_ipsec_mingw with a custom command 00:03:42.844 [200/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:42.844 [201/738] Linking static target lib/librte_telemetry.a 00:03:42.844 [202/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:42.844 [203/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:42.844 [204/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:42.844 [205/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.108 [206/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:43.108 [207/738] Linking static target lib/librte_timer.a 00:03:43.108 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:43.108 [209/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:43.108 [210/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:43.108 [211/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:43.108 [212/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:43.108 
[213/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.108 [214/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:43.108 [215/738] Generating lib/rte_fib_mingw with a custom command 00:03:43.108 [216/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:43.108 [217/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:43.108 [218/738] Generating lib/rte_fib_def with a custom command 00:03:43.108 [219/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:43.108 [220/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:43.108 [221/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:43.108 [222/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:43.108 [223/738] Linking static target lib/librte_cmdline.a 00:03:43.108 [224/738] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:43.108 [225/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:43.108 [226/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:43.108 [227/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:43.108 [228/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:43.108 [229/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:43.108 [230/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:43.108 [231/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:43.108 [232/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:43.108 [233/738] Linking static target lib/librte_metrics.a 00:03:43.108 [234/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:43.108 [235/738] Generating lib/rte_port_mingw with a custom command 00:03:43.108 [236/738] Generating lib/rte_port_def with a custom command 00:03:43.108 [237/738] Generating lib/rte_pdump_mingw with a custom command 00:03:43.108 [238/738] Generating lib/rte_pdump_def with a custom command 00:03:43.108 [239/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:43.108 [240/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:43.108 [241/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:43.108 [242/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:43.108 [243/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:43.108 [244/738] Linking static target lib/librte_bitratestats.a 00:03:43.108 [245/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:43.108 [246/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:43.108 [247/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:43.108 [248/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:43.108 [249/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:43.108 [250/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:43.108 [251/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:43.108 [252/738] Linking static target lib/librte_rawdev.a 00:03:43.108 [253/738] Linking static target lib/librte_net.a 00:03:43.108 [254/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 
00:03:43.108 [255/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:43.108 [256/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:43.108 [257/738] Generating lib/rte_table_def with a custom command 00:03:43.108 [258/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:43.108 [259/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:43.108 [260/738] Generating lib/rte_table_mingw with a custom command 00:03:43.108 [261/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:43.108 [262/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:43.108 [263/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:43.108 [264/738] Linking static target lib/librte_dmadev.a 00:03:43.108 [265/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:43.108 [266/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:43.108 [267/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:43.108 [268/738] Generating lib/rte_pipeline_def with a custom command 00:03:43.108 [269/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:43.108 [270/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:43.108 [271/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:43.108 [272/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:43.108 [273/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:43.108 [274/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:43.108 [275/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:43.108 [276/738] Linking static target lib/librte_compressdev.a 00:03:43.369 [277/738] Generating lib/rte_graph_def with a custom command 00:03:43.369 [278/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:43.369 [279/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:43.369 [280/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:43.369 [281/738] Generating lib/rte_graph_mingw with a custom command 00:03:43.369 [282/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:43.369 [283/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:43.369 [284/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:43.369 [285/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:43.369 [286/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.369 [287/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.369 [288/738] Generating lib/rte_node_def with a custom command 00:03:43.369 [289/738] Generating lib/rte_node_mingw with a custom command 00:03:43.369 [290/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:43.369 [291/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:43.369 [292/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:43.369 [293/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:43.369 [294/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:43.369 [295/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:43.369 [296/738] Generating 
drivers/rte_bus_pci_def with a custom command 00:03:43.369 [297/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:43.369 [298/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:43.369 [299/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:43.369 [300/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:43.369 [301/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:43.369 [302/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:43.369 [303/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:43.369 [304/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:43.369 [305/738] Linking static target lib/librte_latencystats.a 00:03:43.369 [306/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:43.369 [307/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:43.369 [308/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:43.369 [309/738] Linking static target lib/librte_rcu.a 00:03:43.369 [310/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:43.369 [311/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:43.369 [312/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:43.369 [313/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:43.369 [314/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.369 [315/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:43.369 [316/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:43.369 [317/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:43.369 [318/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:43.369 [319/738] Linking static target lib/librte_gpudev.a 00:03:43.369 [320/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:43.369 [321/738] Linking static target lib/librte_bbdev.a 00:03:43.369 [322/738] Linking static target lib/librte_regexdev.a 00:03:43.369 [323/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:43.369 [324/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:43.369 [325/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.369 [326/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:43.369 [327/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:43.369 [328/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:43.369 [329/738] Linking static target lib/librte_gso.a 00:03:43.369 [330/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:43.631 [331/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:43.631 [332/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:43.631 [333/738] Linking static target lib/librte_gro.a 00:03:43.631 [334/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:43.631 [335/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:43.631 [336/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:43.631 [337/738] Linking static target lib/librte_mempool.a 00:03:43.631 [338/738] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:43.631 [339/738] Linking static target lib/librte_reorder.a 00:03:43.631 [340/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:43.631 [341/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:43.631 [342/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:43.631 [343/738] Linking static target lib/librte_distributor.a 00:03:43.631 [344/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.631 [345/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:43.631 [346/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:43.631 [347/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.631 [348/738] Linking static target lib/librte_power.a 00:03:43.631 [349/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:43.631 [350/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.631 [351/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:43.631 [352/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:43.631 [353/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:43.631 [354/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:43.631 [355/738] Linking static target lib/librte_ip_frag.a 00:03:43.631 [356/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:43.631 [357/738] Linking target lib/librte_telemetry.so.23.0 00:03:43.631 [358/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:43.631 [359/738] Linking static target lib/librte_security.a 00:03:43.631 [360/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:43.631 [361/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:43.631 [362/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:43.631 [363/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:43.631 [364/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:43.631 [365/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:43.631 [366/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:43.631 [367/738] Linking static target lib/librte_pcapng.a 00:03:43.631 [368/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:43.631 [369/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:43.631 [370/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:43.631 [371/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.631 [372/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:43.631 [373/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:43.631 [374/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.894 [375/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:43.894 [376/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:43.894 [377/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:43.894 [378/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:43.894 
[379/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:43.894 [380/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.894 [381/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:43.894 [382/738] Linking static target lib/librte_eal.a 00:03:43.894 [383/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:43.894 [384/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:43.894 [385/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:43.894 [386/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:43.894 [387/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:43.894 [388/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:43.894 [389/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:43.894 [390/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:43.894 [391/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:43.894 [392/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:43.894 [393/738] Linking static target lib/librte_rib.a 00:03:43.894 [394/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.895 [395/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:43.895 [396/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:43.895 [397/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:43.895 [398/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:43.895 [399/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.895 [400/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.895 [401/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.895 [402/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:43.895 [403/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:43.895 [404/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:43.895 [405/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:43.895 [406/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:43.895 [407/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:43.895 [408/738] Linking static target lib/librte_lpm.a 00:03:43.895 [409/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:43.895 [410/738] Linking static target lib/librte_bpf.a 00:03:43.895 [411/738] Linking static target lib/librte_graph.a 00:03:43.895 [412/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:43.895 [413/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:43.895 [414/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.895 [415/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:44.159 [416/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:44.159 [417/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:44.159 [418/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:44.159 [419/738] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:44.159 [420/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:44.159 [421/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:44.159 [422/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:44.159 [423/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:44.159 [424/738] Linking static target lib/librte_mbuf.a 00:03:44.159 [425/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:44.159 [426/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:44.159 [427/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:44.159 [428/738] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:44.159 [429/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:44.159 [430/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:44.159 [431/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.159 [432/738] Linking static target drivers/librte_bus_vdev.a 00:03:44.159 [433/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:44.159 [434/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.159 [435/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:44.159 [436/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:44.159 [437/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:44.159 [438/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:44.159 [439/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:44.159 [440/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:44.159 [441/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:44.159 [442/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:44.159 [443/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.159 [444/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:44.159 [445/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:44.417 [446/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:44.417 [447/738] Linking static target lib/librte_efd.a 00:03:44.417 [448/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:44.417 [449/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:44.417 [450/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:44.417 [451/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:44.417 [452/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:44.417 [453/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:44.417 [454/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:44.417 [455/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:44.417 [456/738] Linking static target lib/librte_fib.a 00:03:44.417 [457/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:44.417 [458/738] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:44.417 [459/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:44.417 [460/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:44.417 [461/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:44.417 [462/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.417 [463/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:44.417 [464/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.417 [465/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:44.417 [466/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:44.417 [467/738] Linking static target drivers/librte_bus_pci.a 00:03:44.417 [468/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:44.417 [469/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.417 [470/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.417 [471/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:44.417 [472/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.417 [473/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:44.417 [474/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:44.417 [475/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:44.417 [476/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:44.417 [477/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:44.417 [478/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:44.417 [479/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:44.417 [480/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:44.677 [481/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:44.677 [482/738] Linking static target lib/librte_pdump.a 00:03:44.677 [483/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [484/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:44.677 [485/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [486/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [487/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:44.677 [488/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:44.677 [489/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [490/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:44.677 [491/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:44.677 [492/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:44.677 [493/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:44.677 [494/738] Compiling C object 
app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:44.677 [495/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:44.677 [496/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:44.677 [497/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:44.677 [498/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:44.677 [499/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [500/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:44.677 [501/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:44.677 [502/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:44.677 [503/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:44.677 [504/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [505/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [506/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [507/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:44.677 [508/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:44.677 [509/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:44.677 [510/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:44.677 [511/738] Linking static target lib/librte_table.a 00:03:44.677 [512/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:44.677 [513/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:44.677 [514/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.677 [515/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:44.677 [516/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:44.677 [517/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:44.677 [518/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:44.677 [519/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:44.677 [520/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:44.677 [521/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:44.677 [522/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:44.935 [523/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:44.935 [524/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:44.935 [525/738] Linking static target lib/librte_sched.a 00:03:44.935 [526/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:44.935 [527/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.935 [528/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.935 [529/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:44.935 [530/738] Compiling C object 
app/dpdk-proc-info.p/proc-info_main.c.o 00:03:44.935 [531/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:44.936 [532/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.936 [533/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:44.936 [534/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:44.936 [535/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:44.936 [536/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:44.936 [537/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:44.936 [538/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:44.936 [539/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:44.936 [540/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:44.936 [541/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:44.936 [542/738] Linking static target drivers/librte_mempool_ring.a 00:03:44.936 [543/738] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:44.936 [544/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:44.936 [545/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:44.936 [546/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:44.936 [547/738] Linking static target lib/librte_node.a 00:03:44.936 [548/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:44.936 [549/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:44.936 [550/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:44.936 [551/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:44.936 [552/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.936 [553/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:44.936 [554/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:44.936 [555/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:44.936 [556/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:44.936 [557/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:44.936 [558/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:44.936 [559/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:44.936 [560/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:44.936 [561/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:44.936 [562/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:44.936 [563/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:44.936 [564/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:44.936 [565/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:44.936 [566/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:44.936 [567/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:45.195 [568/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:45.195 [569/738] Linking static 
target lib/librte_member.a 00:03:45.196 [570/738] Linking static target lib/librte_ipsec.a 00:03:45.196 [571/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:45.196 [572/738] Linking static target lib/librte_cryptodev.a 00:03:45.196 [573/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:45.196 [574/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:45.196 [575/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:45.196 [576/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.196 [577/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:45.196 [578/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:45.196 [579/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.196 [580/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:45.196 [581/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:45.196 [582/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:45.196 [583/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:45.196 [584/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:45.196 [585/738] Linking static target lib/librte_port.a 00:03:45.196 [586/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:45.196 [587/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:45.196 [588/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:45.196 [589/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.196 [590/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:45.455 [591/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:45.455 [592/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:45.455 [593/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:45.455 [594/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:45.455 [595/738] Linking static target lib/librte_hash.a 00:03:45.455 [596/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.455 [597/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:45.455 [598/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.455 [599/738] Linking static target lib/librte_eventdev.a 00:03:45.455 [600/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:45.455 [601/738] Linking static target lib/librte_ethdev.a 00:03:45.455 [602/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:45.455 [603/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:45.455 [604/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:45.455 [605/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:45.455 [606/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:45.455 [607/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:45.455 [608/738] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:45.713 [609/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:45.713 [610/738] Linking static target lib/librte_acl.a 00:03:45.713 [611/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.713 [612/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:45.971 [613/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.971 [614/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.971 [615/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:45.971 [616/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:46.540 [617/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:46.540 [618/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:46.799 [619/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:46.799 [620/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.058 [621/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:47.058 [622/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:47.058 [623/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:47.315 [624/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:47.315 [625/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:47.315 [626/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:47.315 [627/738] Linking static target drivers/librte_net_i40e.a 00:03:47.573 [628/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:47.832 [629/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:47.832 [630/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.089 [631/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:49.994 [632/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.562 [633/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.562 [634/738] Linking target lib/librte_eal.so.23.0 00:03:50.562 [635/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:50.562 [636/738] Linking target lib/librte_ring.so.23.0 00:03:50.562 [637/738] Linking target lib/librte_meter.so.23.0 00:03:50.562 [638/738] Linking target lib/librte_timer.so.23.0 00:03:50.562 [639/738] Linking target lib/librte_pci.so.23.0 00:03:50.562 [640/738] Linking target lib/librte_cfgfile.so.23.0 00:03:50.562 [641/738] Linking target lib/librte_stack.so.23.0 00:03:50.562 [642/738] Linking target lib/librte_jobstats.so.23.0 00:03:50.562 [643/738] Linking target lib/librte_dmadev.so.23.0 00:03:50.562 [644/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:50.562 [645/738] Linking target lib/librte_graph.so.23.0 00:03:50.562 [646/738] Linking target lib/librte_rawdev.so.23.0 00:03:50.562 [647/738] Linking target lib/librte_acl.so.23.0 00:03:50.562 [648/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:50.562 [649/738] Generating symbol file 
lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:50.562 [650/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:50.563 [651/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:50.563 [652/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:50.563 [653/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:50.563 [654/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:50.563 [655/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:50.822 [656/738] Linking target lib/librte_rcu.so.23.0 00:03:50.822 [657/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:50.822 [658/738] Linking target lib/librte_mempool.so.23.0 00:03:50.822 [659/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:50.822 [660/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:50.822 [661/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:50.822 [662/738] Linking target lib/librte_rib.so.23.0 00:03:50.822 [663/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:50.822 [664/738] Linking target lib/librte_mbuf.so.23.0 00:03:50.822 [665/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:50.822 [666/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:50.822 [667/738] Linking target lib/librte_net.so.23.0 00:03:50.822 [668/738] Linking target lib/librte_gpudev.so.23.0 00:03:50.822 [669/738] Linking target lib/librte_bbdev.so.23.0 00:03:50.822 [670/738] Linking target lib/librte_compressdev.so.23.0 00:03:50.822 [671/738] Linking target lib/librte_reorder.so.23.0 00:03:50.822 [672/738] Linking target lib/librte_regexdev.so.23.0 00:03:50.822 [673/738] Linking target lib/librte_cryptodev.so.23.0 00:03:50.822 [674/738] Linking target lib/librte_sched.so.23.0 00:03:50.822 [675/738] Linking target lib/librte_distributor.so.23.0 00:03:50.822 [676/738] Linking target lib/librte_fib.so.23.0 00:03:51.082 [677/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:51.083 [678/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:51.083 [679/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:51.083 [680/738] Linking target lib/librte_security.so.23.0 00:03:51.083 [681/738] Linking target lib/librte_hash.so.23.0 00:03:51.083 [682/738] Linking target lib/librte_cmdline.so.23.0 00:03:51.083 [683/738] Linking target lib/librte_ethdev.so.23.0 00:03:51.083 [684/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:51.083 [685/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:51.083 [686/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:51.083 [687/738] Linking target lib/librte_lpm.so.23.0 00:03:51.083 [688/738] Linking target lib/librte_efd.so.23.0 00:03:51.083 [689/738] Linking target lib/librte_member.so.23.0 00:03:51.083 [690/738] Linking target lib/librte_ipsec.so.23.0 00:03:51.083 [691/738] Linking target lib/librte_metrics.so.23.0 00:03:51.083 [692/738] Linking target lib/librte_gso.so.23.0 00:03:51.083 [693/738] Linking target lib/librte_pcapng.so.23.0 00:03:51.083 
[694/738] Linking target lib/librte_gro.so.23.0 00:03:51.083 [695/738] Linking target lib/librte_bpf.so.23.0 00:03:51.083 [696/738] Linking target lib/librte_ip_frag.so.23.0 00:03:51.083 [697/738] Linking target lib/librte_power.so.23.0 00:03:51.083 [698/738] Linking target lib/librte_eventdev.so.23.0 00:03:51.083 [699/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:51.342 [700/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:51.342 [701/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:51.342 [702/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:51.342 [703/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:51.342 [704/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:51.342 [705/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:51.342 [706/738] Linking target lib/librte_node.so.23.0 00:03:51.342 [707/738] Linking target lib/librte_bitratestats.so.23.0 00:03:51.343 [708/738] Linking target lib/librte_latencystats.so.23.0 00:03:51.343 [709/738] Linking target lib/librte_pdump.so.23.0 00:03:51.343 [710/738] Linking target lib/librte_port.so.23.0 00:03:51.343 [711/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:51.343 [712/738] Linking target lib/librte_table.so.23.0 00:03:51.602 [713/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:52.541 [714/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:52.541 [715/738] Linking static target lib/librte_pipeline.a 00:03:52.800 [716/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:52.800 [717/738] Linking static target lib/librte_vhost.a 00:03:53.058 [718/738] Linking target app/dpdk-dumpcap 00:03:53.058 [719/738] Linking target app/dpdk-pdump 00:03:53.058 [720/738] Linking target app/dpdk-proc-info 00:03:53.058 [721/738] Linking target app/dpdk-test-cmdline 00:03:53.058 [722/738] Linking target app/dpdk-test-acl 00:03:53.058 [723/738] Linking target app/dpdk-test-bbdev 00:03:53.058 [724/738] Linking target app/dpdk-test-regex 00:03:53.058 [725/738] Linking target app/dpdk-test-fib 00:03:53.058 [726/738] Linking target app/dpdk-test-crypto-perf 00:03:53.316 [727/738] Linking target app/dpdk-test-sad 00:03:53.316 [728/738] Linking target app/dpdk-test-flow-perf 00:03:53.316 [729/738] Linking target app/dpdk-test-compress-perf 00:03:53.316 [730/738] Linking target app/dpdk-test-gpudev 00:03:53.316 [731/738] Linking target app/dpdk-test-security-perf 00:03:53.316 [732/738] Linking target app/dpdk-test-pipeline 00:03:53.316 [733/738] Linking target app/dpdk-test-eventdev 00:03:53.316 [734/738] Linking target app/dpdk-testpmd 00:03:54.252 [735/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.252 [736/738] Linking target lib/librte_vhost.so.23.0 00:03:55.190 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.190 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:55.190 16:30:43 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:55.190 16:30:43 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:55.191 16:30:43 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:03:55.191 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:55.191 [0/1] Installing files. 00:03:55.459 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.459 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:55.460 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.460 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.460 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.461 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 
00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:55.461 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:55.461 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:55.462 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.462 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.463 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:55.463 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:55.464 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.464 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:55.465 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:55.465 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ring.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:03:55.465 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:03:55.465 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.465 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing lib/librte_node.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.813 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.813 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.813 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.813 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:03:55.813 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.813 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.814 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.815 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:55.816 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:55.816 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:55.816 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:55.816 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:55.816 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:55.816 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:03:55.816 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:55.816 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:55.816 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:55.816 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:55.816 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:55.816 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:55.816 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:55.816 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:55.816 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:55.816 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:55.816 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:55.816 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:55.816 Installing symlink pointing to librte_meter.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:55.816 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:55.816 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:55.816 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:55.816 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:55.816 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:55.816 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:55.816 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:55.816 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:55.816 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:55.816 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:55.816 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:55.816 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:55.816 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:55.816 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:55.816 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:55.816 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:55.816 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:55.816 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:55.816 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:55.816 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:55.816 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:55.816 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:55.816 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:55.816 Installing symlink pointing to librte_compressdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:55.816 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:55.816 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:55.816 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:55.816 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:55.816 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:55.816 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:55.816 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:55.816 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:55.816 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:55.816 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:55.816 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:55.816 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:55.816 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:55.816 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:55.816 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:55.817 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:55.817 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:55.817 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:55.817 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:55.817 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:55.817 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:55.817 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:55.817 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:55.817 Installing symlink pointing to librte_member.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:55.817 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:55.817 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:55.817 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:55.817 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:55.817 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:55.817 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:55.817 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:55.817 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:55.817 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:55.817 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:55.817 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:55.817 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:55.817 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:55.817 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:55.817 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:55.817 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:55.817 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:55.817 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:55.817 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:55.817 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:55.817 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:55.817 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:55.817 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:55.817 Installing symlink pointing to librte_ipsec.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:55.817 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:55.817 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:55.817 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:55.817 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:55.817 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:55.817 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:55.817 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:55.817 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:55.817 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:55.817 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:55.817 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:55.817 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:55.817 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:55.817 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:55.817 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:55.817 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:55.817 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:55.817 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:55.817 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:55.817 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:55.817 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:55.817 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:55.817 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:55.817 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:55.817 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:55.817 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:55.817 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:55.817 './librte_net_i40e.so' -> 
'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:55.817 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:55.817 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:55.817 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:55.817 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:55.817 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:55.817 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:55.817 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:55.817 16:30:44 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:55.817 16:30:44 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.817 00:03:55.817 real 0m18.517s 00:03:55.817 user 5m23.209s 00:03:55.817 sys 2m20.646s 00:03:55.817 16:30:44 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:55.817 16:30:44 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:55.817 ************************************ 00:03:55.817 END TEST build_native_dpdk 00:03:55.817 ************************************ 00:03:55.817 16:30:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:55.817 16:30:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:55.817 16:30:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:55.817 16:30:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:55.817 16:30:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:55.817 16:30:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:55.817 16:30:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:55.817 16:30:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:55.817 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:55.817 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:55.817 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:55.817 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:56.121 Using 'verbs' RDMA provider 00:04:04.494 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:14.472 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:14.472 Creating mk/config.mk...done. 00:04:14.472 Creating mk/cc.flags.mk...done. 00:04:14.472 Type 'make' to build. 
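[Editor's note] At this point in the log the DPDK artifacts are fully staged: headers under dpdk/build/include, shared libraries and their symlinks under dpdk/build/lib, and the libdpdk.pc / libdpdk-libs.pc pkg-config files under dpdk/build/lib/pkgconfig, which is exactly what the SPDK configure step consumes ("Using ... for additional libs"). As a minimal sketch of using that staged install outside this job (hello.c and the output name are hypothetical; the prefix path is taken from the log above):

    # Point pkg-config at the staged DPDK install
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    # Should report the DPDK version that was just built and installed
    pkg-config --modversion libdpdk
    # Compile and link a hypothetical hello.c against the installed headers/libs
    cc hello.c $(pkg-config --cflags --libs libdpdk) -o hello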
00:04:14.472 16:31:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:04:14.472 16:31:03 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:14.472 16:31:03 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:14.472 16:31:03 -- common/autotest_common.sh@10 -- $ set +x 00:04:14.472 ************************************ 00:04:14.472 START TEST make 00:04:14.472 ************************************ 00:04:14.472 16:31:03 make -- common/autotest_common.sh@1129 -- $ make -j144 00:04:14.732 make[1]: Nothing to be done for 'all'. 00:04:16.112 The Meson build system 00:04:16.112 Version: 1.5.0 00:04:16.112 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:16.112 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:16.112 Build type: native build 00:04:16.112 Project name: libvfio-user 00:04:16.112 Project version: 0.0.1 00:04:16.112 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:16.112 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:16.112 Host machine cpu family: x86_64 00:04:16.112 Host machine cpu: x86_64 00:04:16.112 Run-time dependency threads found: YES 00:04:16.112 Library dl found: YES 00:04:16.112 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:16.112 Run-time dependency json-c found: YES 0.17 00:04:16.112 Run-time dependency cmocka found: YES 1.1.7 00:04:16.112 Program pytest-3 found: NO 00:04:16.112 Program flake8 found: NO 00:04:16.112 Program misspell-fixer found: NO 00:04:16.112 Program restructuredtext-lint found: NO 00:04:16.112 Program valgrind found: YES (/usr/bin/valgrind) 00:04:16.112 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:16.112 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:16.112 Compiler for C supports arguments -Wwrite-strings: YES 00:04:16.112 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:16.112 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:16.112 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:16.112 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:16.112 Build targets in project: 8 00:04:16.112 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:16.112 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:16.112 00:04:16.112 libvfio-user 0.0.1 00:04:16.112 00:04:16.112 User defined options 00:04:16.112 buildtype : debug 00:04:16.112 default_library: shared 00:04:16.112 libdir : /usr/local/lib 00:04:16.112 00:04:16.112 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:16.370 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:16.370 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:16.370 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:16.370 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:16.370 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:16.370 [5/37] Compiling C object samples/null.p/null.c.o 00:04:16.370 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:16.370 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:16.370 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:16.370 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:16.370 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:16.370 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:16.370 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:16.370 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:16.370 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:16.370 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:16.370 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:16.370 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:16.370 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:16.370 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:16.370 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:16.370 [21/37] Compiling C object samples/server.p/server.c.o 00:04:16.370 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:16.370 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:16.370 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:16.370 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:16.370 [26/37] Compiling C object samples/client.p/client.c.o 00:04:16.628 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:16.628 [28/37] Linking target samples/client 00:04:16.628 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:16.628 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:16.628 [31/37] Linking target test/unit_tests 00:04:16.628 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:16.628 [33/37] Linking target samples/server 00:04:16.628 [34/37] Linking target samples/gpio-pci-idio-16 00:04:16.628 [35/37] Linking target samples/lspci 00:04:16.628 [36/37] Linking target samples/shadow_ioeventfd_server 00:04:16.628 [37/37] Linking target samples/null 00:04:16.628 INFO: autodetecting backend as ninja 00:04:16.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
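[Editor's note] The libvfio-user configuration summarized above (buildtype debug, default_library shared, libdir /usr/local/lib) is a stock Meson setup. A sketch of reproducing it by hand outside the harness, with placeholder paths, would look like the following; the staged DESTDIR install matches what the harness runs next:

    # Configure an out-of-tree debug build of libvfio-user (paths are placeholders)
    meson setup build-debug /path/to/libvfio-user -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    # Build with the autodetected ninja backend
    ninja -C build-debug
    # Stage the install into a scratch prefix, as the harness does with DESTDIR below
    DESTDIR=/tmp/libvfio-user-stage meson install -C build-debug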
00:04:16.628 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:16.885 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:16.885 ninja: no work to do. 00:04:38.803 CC lib/ut_mock/mock.o 00:04:38.803 CC lib/log/log.o 00:04:38.803 CC lib/log/log_flags.o 00:04:38.803 CC lib/ut/ut.o 00:04:38.803 CC lib/log/log_deprecated.o 00:04:38.803 LIB libspdk_ut_mock.a 00:04:38.803 LIB libspdk_ut.a 00:04:38.803 LIB libspdk_log.a 00:04:38.803 SO libspdk_ut_mock.so.6.0 00:04:38.803 SO libspdk_ut.so.2.0 00:04:38.803 SO libspdk_log.so.7.1 00:04:38.803 SYMLINK libspdk_ut_mock.so 00:04:38.803 SYMLINK libspdk_ut.so 00:04:38.803 SYMLINK libspdk_log.so 00:04:38.803 CC lib/util/base64.o 00:04:38.803 CC lib/util/cpuset.o 00:04:38.803 CC lib/util/bit_array.o 00:04:38.803 CC lib/util/crc16.o 00:04:38.803 CC lib/util/crc32.o 00:04:38.803 CC lib/util/crc32_ieee.o 00:04:38.803 CC lib/util/crc32c.o 00:04:38.803 CC lib/dma/dma.o 00:04:38.803 CC lib/util/crc64.o 00:04:38.803 CC lib/util/dif.o 00:04:38.803 CC lib/util/fd.o 00:04:38.803 CC lib/util/file.o 00:04:38.803 CXX lib/trace_parser/trace.o 00:04:38.803 CC lib/util/fd_group.o 00:04:38.803 CC lib/ioat/ioat.o 00:04:38.803 CC lib/util/hexlify.o 00:04:38.803 CC lib/util/iov.o 00:04:38.803 CC lib/util/math.o 00:04:38.803 CC lib/util/net.o 00:04:38.803 CC lib/util/pipe.o 00:04:38.803 CC lib/util/strerror_tls.o 00:04:38.803 CC lib/util/string.o 00:04:38.803 CC lib/util/uuid.o 00:04:38.803 CC lib/util/xor.o 00:04:38.803 CC lib/util/md5.o 00:04:38.803 CC lib/util/zipf.o 00:04:38.803 CC lib/vfio_user/host/vfio_user_pci.o 00:04:38.803 CC lib/vfio_user/host/vfio_user.o 00:04:38.803 LIB libspdk_dma.a 00:04:38.803 SO libspdk_dma.so.5.0 00:04:38.803 LIB libspdk_ioat.a 00:04:38.803 SYMLINK libspdk_dma.so 00:04:38.803 SO libspdk_ioat.so.7.0 00:04:38.803 LIB libspdk_vfio_user.a 00:04:38.803 SYMLINK libspdk_ioat.so 00:04:38.803 SO libspdk_vfio_user.so.5.0 00:04:38.803 SYMLINK libspdk_vfio_user.so 00:04:38.803 LIB libspdk_util.a 00:04:39.061 SO libspdk_util.so.10.1 00:04:39.061 SYMLINK libspdk_util.so 00:04:39.061 LIB libspdk_trace_parser.a 00:04:39.061 SO libspdk_trace_parser.so.6.0 00:04:39.319 CC lib/json/json_parse.o 00:04:39.319 CC lib/json/json_util.o 00:04:39.319 CC lib/json/json_write.o 00:04:39.319 CC lib/vmd/vmd.o 00:04:39.319 CC lib/vmd/led.o 00:04:39.319 CC lib/rdma_utils/rdma_utils.o 00:04:39.319 CC lib/conf/conf.o 00:04:39.319 CC lib/idxd/idxd.o 00:04:39.319 CC lib/idxd/idxd_user.o 00:04:39.319 CC lib/env_dpdk/memory.o 00:04:39.319 CC lib/env_dpdk/env.o 00:04:39.319 CC lib/idxd/idxd_kernel.o 00:04:39.319 CC lib/env_dpdk/pci.o 00:04:39.319 CC lib/env_dpdk/init.o 00:04:39.319 CC lib/env_dpdk/pci_ioat.o 00:04:39.319 CC lib/env_dpdk/threads.o 00:04:39.319 CC lib/env_dpdk/pci_virtio.o 00:04:39.319 CC lib/env_dpdk/pci_vmd.o 00:04:39.319 CC lib/env_dpdk/pci_idxd.o 00:04:39.319 CC lib/env_dpdk/pci_event.o 00:04:39.319 CC lib/env_dpdk/sigbus_handler.o 00:04:39.319 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:39.319 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:39.319 CC lib/env_dpdk/pci_dpdk.o 00:04:39.319 SYMLINK libspdk_trace_parser.so 00:04:39.319 LIB libspdk_conf.a 00:04:39.578 SO libspdk_conf.so.6.0 00:04:39.578 LIB libspdk_rdma_utils.a 00:04:39.578 LIB libspdk_json.a 00:04:39.578 SO libspdk_rdma_utils.so.1.0 00:04:39.578 SYMLINK libspdk_conf.so 00:04:39.578 SO libspdk_json.so.6.0 00:04:39.578 
SYMLINK libspdk_rdma_utils.so 00:04:39.578 SYMLINK libspdk_json.so 00:04:39.836 LIB libspdk_idxd.a 00:04:39.836 CC lib/rdma_provider/common.o 00:04:39.836 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:39.836 CC lib/jsonrpc/jsonrpc_server.o 00:04:39.836 CC lib/jsonrpc/jsonrpc_client.o 00:04:39.836 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:39.836 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:39.836 SO libspdk_idxd.so.12.1 00:04:39.836 LIB libspdk_vmd.a 00:04:39.836 SO libspdk_vmd.so.6.0 00:04:39.836 SYMLINK libspdk_idxd.so 00:04:39.836 SYMLINK libspdk_vmd.so 00:04:39.836 LIB libspdk_rdma_provider.a 00:04:39.836 SO libspdk_rdma_provider.so.7.0 00:04:39.836 LIB libspdk_jsonrpc.a 00:04:40.095 SYMLINK libspdk_rdma_provider.so 00:04:40.096 SO libspdk_jsonrpc.so.6.0 00:04:40.096 SYMLINK libspdk_jsonrpc.so 00:04:40.096 LIB libspdk_env_dpdk.a 00:04:40.096 SO libspdk_env_dpdk.so.15.1 00:04:40.096 CC lib/rpc/rpc.o 00:04:40.355 SYMLINK libspdk_env_dpdk.so 00:04:40.355 LIB libspdk_rpc.a 00:04:40.355 SO libspdk_rpc.so.6.0 00:04:40.355 SYMLINK libspdk_rpc.so 00:04:40.615 CC lib/notify/notify.o 00:04:40.615 CC lib/trace/trace.o 00:04:40.615 CC lib/notify/notify_rpc.o 00:04:40.615 CC lib/trace/trace_flags.o 00:04:40.615 CC lib/trace/trace_rpc.o 00:04:40.615 CC lib/keyring/keyring.o 00:04:40.615 CC lib/keyring/keyring_rpc.o 00:04:40.875 LIB libspdk_notify.a 00:04:40.875 SO libspdk_notify.so.6.0 00:04:40.875 LIB libspdk_keyring.a 00:04:40.875 SO libspdk_keyring.so.2.0 00:04:40.875 SYMLINK libspdk_notify.so 00:04:40.875 LIB libspdk_trace.a 00:04:40.875 SYMLINK libspdk_keyring.so 00:04:40.875 SO libspdk_trace.so.11.0 00:04:40.875 SYMLINK libspdk_trace.so 00:04:41.133 CC lib/thread/thread.o 00:04:41.133 CC lib/thread/iobuf.o 00:04:41.133 CC lib/sock/sock.o 00:04:41.133 CC lib/sock/sock_rpc.o 00:04:41.419 LIB libspdk_sock.a 00:04:41.419 SO libspdk_sock.so.10.0 00:04:41.679 SYMLINK libspdk_sock.so 00:04:41.679 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:41.679 CC lib/nvme/nvme_ctrlr.o 00:04:41.679 CC lib/nvme/nvme_fabric.o 00:04:41.679 CC lib/nvme/nvme_ns_cmd.o 00:04:41.679 CC lib/nvme/nvme_ns.o 00:04:41.679 CC lib/nvme/nvme_pcie_common.o 00:04:41.679 CC lib/nvme/nvme_pcie.o 00:04:41.679 CC lib/nvme/nvme_qpair.o 00:04:41.679 CC lib/nvme/nvme.o 00:04:41.679 CC lib/nvme/nvme_quirks.o 00:04:41.679 CC lib/nvme/nvme_discovery.o 00:04:41.679 CC lib/nvme/nvme_transport.o 00:04:41.679 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:41.679 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:41.679 CC lib/nvme/nvme_tcp.o 00:04:41.679 CC lib/nvme/nvme_opal.o 00:04:41.679 CC lib/nvme/nvme_io_msg.o 00:04:41.679 CC lib/nvme/nvme_poll_group.o 00:04:41.679 CC lib/nvme/nvme_zns.o 00:04:41.679 CC lib/nvme/nvme_stubs.o 00:04:41.679 CC lib/nvme/nvme_auth.o 00:04:41.679 CC lib/nvme/nvme_cuse.o 00:04:41.679 CC lib/nvme/nvme_vfio_user.o 00:04:41.679 CC lib/nvme/nvme_rdma.o 00:04:42.617 LIB libspdk_thread.a 00:04:42.617 SO libspdk_thread.so.11.0 00:04:42.617 SYMLINK libspdk_thread.so 00:04:42.617 CC lib/init/json_config.o 00:04:42.617 CC lib/vfu_tgt/tgt_endpoint.o 00:04:42.617 CC lib/init/subsystem.o 00:04:42.617 CC lib/vfu_tgt/tgt_rpc.o 00:04:42.617 CC lib/init/subsystem_rpc.o 00:04:42.617 CC lib/accel/accel.o 00:04:42.617 CC lib/init/rpc.o 00:04:42.617 CC lib/accel/accel_rpc.o 00:04:42.617 CC lib/accel/accel_sw.o 00:04:42.617 CC lib/virtio/virtio_vhost_user.o 00:04:42.617 CC lib/virtio/virtio.o 00:04:42.617 CC lib/blob/blobstore.o 00:04:42.617 CC lib/blob/zeroes.o 00:04:42.617 CC lib/virtio/virtio_vfio_user.o 00:04:42.617 CC lib/blob/request.o 00:04:42.617 CC 
lib/virtio/virtio_pci.o 00:04:42.617 CC lib/fsdev/fsdev.o 00:04:42.617 CC lib/blob/blob_bs_dev.o 00:04:42.617 CC lib/fsdev/fsdev_io.o 00:04:42.617 CC lib/fsdev/fsdev_rpc.o 00:04:42.876 LIB libspdk_init.a 00:04:42.876 SO libspdk_init.so.6.0 00:04:42.876 SYMLINK libspdk_init.so 00:04:42.876 LIB libspdk_virtio.a 00:04:42.876 LIB libspdk_vfu_tgt.a 00:04:42.876 SO libspdk_vfu_tgt.so.3.0 00:04:42.876 SO libspdk_virtio.so.7.0 00:04:43.134 SYMLINK libspdk_vfu_tgt.so 00:04:43.134 SYMLINK libspdk_virtio.so 00:04:43.134 CC lib/event/app.o 00:04:43.134 CC lib/event/reactor.o 00:04:43.134 CC lib/event/log_rpc.o 00:04:43.134 CC lib/event/scheduler_static.o 00:04:43.134 CC lib/event/app_rpc.o 00:04:43.134 LIB libspdk_fsdev.a 00:04:43.134 SO libspdk_fsdev.so.2.0 00:04:43.134 SYMLINK libspdk_fsdev.so 00:04:43.394 LIB libspdk_accel.a 00:04:43.394 SO libspdk_accel.so.16.0 00:04:43.394 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:43.394 SYMLINK libspdk_accel.so 00:04:43.394 LIB libspdk_event.a 00:04:43.394 SO libspdk_event.so.14.0 00:04:43.654 SYMLINK libspdk_event.so 00:04:43.654 LIB libspdk_nvme.a 00:04:43.654 CC lib/bdev/bdev.o 00:04:43.654 CC lib/bdev/bdev_rpc.o 00:04:43.654 CC lib/bdev/bdev_zone.o 00:04:43.654 CC lib/bdev/part.o 00:04:43.654 CC lib/bdev/scsi_nvme.o 00:04:43.654 SO libspdk_nvme.so.15.0 00:04:43.914 SYMLINK libspdk_nvme.so 00:04:43.914 LIB libspdk_fuse_dispatcher.a 00:04:43.914 SO libspdk_fuse_dispatcher.so.1.0 00:04:43.914 SYMLINK libspdk_fuse_dispatcher.so 00:04:44.852 LIB libspdk_blob.a 00:04:44.852 SO libspdk_blob.so.12.0 00:04:44.852 SYMLINK libspdk_blob.so 00:04:45.112 CC lib/lvol/lvol.o 00:04:45.112 CC lib/blobfs/blobfs.o 00:04:45.112 CC lib/blobfs/tree.o 00:04:45.689 LIB libspdk_bdev.a 00:04:45.689 SO libspdk_bdev.so.17.0 00:04:45.949 LIB libspdk_blobfs.a 00:04:45.949 SYMLINK libspdk_bdev.so 00:04:45.949 SO libspdk_blobfs.so.11.0 00:04:45.949 LIB libspdk_lvol.a 00:04:45.949 SO libspdk_lvol.so.11.0 00:04:45.949 SYMLINK libspdk_blobfs.so 00:04:45.949 SYMLINK libspdk_lvol.so 00:04:45.949 CC lib/scsi/dev.o 00:04:45.949 CC lib/scsi/lun.o 00:04:45.949 CC lib/scsi/scsi.o 00:04:45.949 CC lib/scsi/port.o 00:04:45.949 CC lib/nbd/nbd.o 00:04:45.949 CC lib/scsi/scsi_bdev.o 00:04:45.949 CC lib/ftl/ftl_core.o 00:04:45.949 CC lib/ftl/ftl_init.o 00:04:45.949 CC lib/scsi/scsi_pr.o 00:04:45.949 CC lib/scsi/task.o 00:04:45.949 CC lib/nbd/nbd_rpc.o 00:04:45.949 CC lib/scsi/scsi_rpc.o 00:04:45.949 CC lib/ftl/ftl_layout.o 00:04:45.949 CC lib/ublk/ublk.o 00:04:45.949 CC lib/ublk/ublk_rpc.o 00:04:45.949 CC lib/ftl/ftl_debug.o 00:04:45.949 CC lib/ftl/ftl_io.o 00:04:45.949 CC lib/ftl/ftl_sb.o 00:04:45.949 CC lib/nvmf/ctrlr.o 00:04:45.949 CC lib/ftl/ftl_l2p.o 00:04:45.949 CC lib/ftl/ftl_nv_cache.o 00:04:45.949 CC lib/ftl/ftl_l2p_flat.o 00:04:45.949 CC lib/nvmf/ctrlr_discovery.o 00:04:45.949 CC lib/ftl/ftl_band_ops.o 00:04:45.949 CC lib/ftl/ftl_writer.o 00:04:45.949 CC lib/nvmf/subsystem.o 00:04:45.949 CC lib/ftl/ftl_band.o 00:04:45.949 CC lib/nvmf/ctrlr_bdev.o 00:04:45.949 CC lib/nvmf/nvmf.o 00:04:45.949 CC lib/ftl/ftl_reloc.o 00:04:45.949 CC lib/ftl/ftl_l2p_cache.o 00:04:45.949 CC lib/nvmf/transport.o 00:04:45.949 CC lib/ftl/ftl_p2l.o 00:04:45.949 CC lib/ftl/ftl_rq.o 00:04:45.949 CC lib/nvmf/nvmf_rpc.o 00:04:45.949 CC lib/ftl/ftl_p2l_log.o 00:04:45.949 CC lib/nvmf/stubs.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:45.949 CC lib/nvmf/tcp.o 00:04:45.949 CC lib/nvmf/mdns_server.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:45.949 CC 
lib/ftl/mngt/ftl_mngt.o 00:04:45.949 CC lib/nvmf/rdma.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.949 CC lib/nvmf/vfio_user.o 00:04:45.949 CC lib/nvmf/auth.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.949 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.950 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.950 CC lib/ftl/utils/ftl_conf.o 00:04:45.950 CC lib/ftl/utils/ftl_mempool.o 00:04:45.950 CC lib/ftl/utils/ftl_md.o 00:04:45.950 CC lib/ftl/utils/ftl_bitmap.o 00:04:45.950 CC lib/ftl/utils/ftl_property.o 00:04:45.950 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:45.950 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:45.950 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:45.950 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:45.950 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:45.950 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:45.950 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:45.950 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:45.950 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:45.950 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:45.950 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:45.950 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:45.950 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:45.950 CC lib/ftl/base/ftl_base_bdev.o 00:04:45.950 CC lib/ftl/base/ftl_base_dev.o 00:04:46.208 CC lib/ftl/ftl_trace.o 00:04:46.467 LIB libspdk_scsi.a 00:04:46.467 LIB libspdk_nbd.a 00:04:46.467 SO libspdk_scsi.so.9.0 00:04:46.467 SO libspdk_nbd.so.7.0 00:04:46.726 SYMLINK libspdk_nbd.so 00:04:46.726 SYMLINK libspdk_scsi.so 00:04:46.726 LIB libspdk_ublk.a 00:04:46.726 SO libspdk_ublk.so.3.0 00:04:46.726 SYMLINK libspdk_ublk.so 00:04:46.726 CC lib/iscsi/conn.o 00:04:46.726 CC lib/iscsi/init_grp.o 00:04:46.726 CC lib/iscsi/iscsi.o 00:04:46.726 CC lib/iscsi/portal_grp.o 00:04:46.726 CC lib/iscsi/param.o 00:04:46.726 CC lib/iscsi/iscsi_subsystem.o 00:04:46.726 CC lib/iscsi/iscsi_rpc.o 00:04:46.726 CC lib/iscsi/tgt_node.o 00:04:46.726 CC lib/vhost/vhost.o 00:04:46.726 CC lib/iscsi/task.o 00:04:46.726 CC lib/vhost/vhost_scsi.o 00:04:46.726 CC lib/vhost/vhost_rpc.o 00:04:46.726 CC lib/vhost/vhost_blk.o 00:04:46.726 CC lib/vhost/rte_vhost_user.o 00:04:46.726 LIB libspdk_ftl.a 00:04:46.986 SO libspdk_ftl.so.9.0 00:04:47.246 SYMLINK libspdk_ftl.so 00:04:47.814 LIB libspdk_vhost.a 00:04:47.814 LIB libspdk_nvmf.a 00:04:47.814 SO libspdk_vhost.so.8.0 00:04:47.814 SO libspdk_nvmf.so.20.0 00:04:47.814 SYMLINK libspdk_vhost.so 00:04:47.814 LIB libspdk_iscsi.a 00:04:47.814 SO libspdk_iscsi.so.8.0 00:04:47.814 SYMLINK libspdk_nvmf.so 00:04:48.073 SYMLINK libspdk_iscsi.so 00:04:48.073 CC module/vfu_device/vfu_virtio_blk.o 00:04:48.073 CC module/vfu_device/vfu_virtio.o 00:04:48.331 CC module/vfu_device/vfu_virtio_fs.o 00:04:48.331 CC module/vfu_device/vfu_virtio_scsi.o 00:04:48.331 CC module/vfu_device/vfu_virtio_rpc.o 00:04:48.331 CC module/env_dpdk/env_dpdk_rpc.o 00:04:48.331 CC module/accel/ioat/accel_ioat.o 00:04:48.331 CC module/accel/ioat/accel_ioat_rpc.o 00:04:48.331 CC module/fsdev/aio/fsdev_aio.o 00:04:48.331 CC module/accel/error/accel_error.o 00:04:48.331 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:48.331 CC module/fsdev/aio/linux_aio_mgr.o 00:04:48.331 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:48.331 CC module/accel/error/accel_error_rpc.o 00:04:48.331 CC module/scheduler/gscheduler/gscheduler.o 00:04:48.331 CC 
module/accel/dsa/accel_dsa.o 00:04:48.331 CC module/accel/dsa/accel_dsa_rpc.o 00:04:48.331 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:48.331 CC module/keyring/file/keyring.o 00:04:48.331 CC module/keyring/file/keyring_rpc.o 00:04:48.331 CC module/sock/posix/posix.o 00:04:48.331 CC module/blob/bdev/blob_bdev.o 00:04:48.331 CC module/accel/iaa/accel_iaa.o 00:04:48.331 CC module/accel/iaa/accel_iaa_rpc.o 00:04:48.331 CC module/keyring/linux/keyring.o 00:04:48.331 CC module/keyring/linux/keyring_rpc.o 00:04:48.331 LIB libspdk_env_dpdk_rpc.a 00:04:48.331 SO libspdk_env_dpdk_rpc.so.6.0 00:04:48.331 SYMLINK libspdk_env_dpdk_rpc.so 00:04:48.331 LIB libspdk_scheduler_gscheduler.a 00:04:48.331 LIB libspdk_scheduler_dpdk_governor.a 00:04:48.331 LIB libspdk_accel_ioat.a 00:04:48.331 SO libspdk_scheduler_gscheduler.so.4.0 00:04:48.331 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:48.331 SO libspdk_accel_ioat.so.6.0 00:04:48.331 LIB libspdk_keyring_file.a 00:04:48.331 LIB libspdk_keyring_linux.a 00:04:48.331 LIB libspdk_scheduler_dynamic.a 00:04:48.331 SO libspdk_keyring_file.so.2.0 00:04:48.331 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:48.331 SO libspdk_keyring_linux.so.1.0 00:04:48.331 SO libspdk_scheduler_dynamic.so.4.0 00:04:48.331 SYMLINK libspdk_scheduler_gscheduler.so 00:04:48.590 SYMLINK libspdk_accel_ioat.so 00:04:48.590 LIB libspdk_accel_error.a 00:04:48.590 SYMLINK libspdk_keyring_file.so 00:04:48.590 LIB libspdk_accel_iaa.a 00:04:48.590 LIB libspdk_accel_dsa.a 00:04:48.590 SYMLINK libspdk_scheduler_dynamic.so 00:04:48.590 SYMLINK libspdk_keyring_linux.so 00:04:48.590 SO libspdk_accel_error.so.2.0 00:04:48.590 SO libspdk_accel_iaa.so.3.0 00:04:48.590 SO libspdk_accel_dsa.so.5.0 00:04:48.590 LIB libspdk_blob_bdev.a 00:04:48.590 SYMLINK libspdk_accel_iaa.so 00:04:48.590 SYMLINK libspdk_accel_dsa.so 00:04:48.590 SYMLINK libspdk_accel_error.so 00:04:48.590 SO libspdk_blob_bdev.so.12.0 00:04:48.590 SYMLINK libspdk_blob_bdev.so 00:04:48.848 LIB libspdk_vfu_device.a 00:04:48.848 SO libspdk_vfu_device.so.3.0 00:04:48.848 LIB libspdk_sock_posix.a 00:04:48.848 LIB libspdk_fsdev_aio.a 00:04:48.848 SYMLINK libspdk_vfu_device.so 00:04:48.848 CC module/bdev/delay/vbdev_delay.o 00:04:48.848 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:48.848 CC module/bdev/gpt/gpt.o 00:04:48.848 CC module/bdev/malloc/bdev_malloc.o 00:04:48.848 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:48.848 CC module/bdev/lvol/vbdev_lvol.o 00:04:48.848 CC module/bdev/gpt/vbdev_gpt.o 00:04:48.848 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:48.848 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:48.848 CC module/bdev/error/vbdev_error.o 00:04:48.848 CC module/bdev/iscsi/bdev_iscsi.o 00:04:48.848 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.848 CC module/bdev/aio/bdev_aio.o 00:04:48.848 CC module/bdev/raid/bdev_raid.o 00:04:48.848 CC module/bdev/ftl/bdev_ftl.o 00:04:48.848 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.848 CC module/bdev/aio/bdev_aio_rpc.o 00:04:48.848 CC module/bdev/raid/raid0.o 00:04:48.848 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:48.848 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.848 CC module/bdev/raid/raid1.o 00:04:48.848 CC module/bdev/passthru/vbdev_passthru.o 00:04:48.848 CC module/bdev/error/vbdev_error_rpc.o 00:04:48.848 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:48.848 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.848 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:48.848 CC module/bdev/raid/concat.o 00:04:48.848 CC module/blobfs/bdev/blobfs_bdev.o 00:04:48.848 SO 
libspdk_sock_posix.so.6.0 00:04:48.848 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:48.848 CC module/bdev/null/bdev_null.o 00:04:48.848 CC module/bdev/split/vbdev_split.o 00:04:48.848 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:48.848 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:48.848 CC module/bdev/nvme/bdev_nvme.o 00:04:48.848 CC module/bdev/null/bdev_null_rpc.o 00:04:48.848 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:48.848 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.848 CC module/bdev/nvme/bdev_mdns_client.o 00:04:48.848 CC module/bdev/nvme/nvme_rpc.o 00:04:48.848 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:48.848 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:48.848 CC module/bdev/nvme/vbdev_opal.o 00:04:48.848 SO libspdk_fsdev_aio.so.1.0 00:04:48.848 SYMLINK libspdk_fsdev_aio.so 00:04:48.848 SYMLINK libspdk_sock_posix.so 00:04:49.106 LIB libspdk_blobfs_bdev.a 00:04:49.106 LIB libspdk_bdev_null.a 00:04:49.106 SO libspdk_bdev_null.so.6.0 00:04:49.106 LIB libspdk_bdev_ftl.a 00:04:49.106 SO libspdk_blobfs_bdev.so.6.0 00:04:49.106 LIB libspdk_bdev_passthru.a 00:04:49.106 LIB libspdk_bdev_aio.a 00:04:49.106 LIB libspdk_bdev_split.a 00:04:49.106 SO libspdk_bdev_ftl.so.6.0 00:04:49.106 SO libspdk_bdev_passthru.so.6.0 00:04:49.106 SO libspdk_bdev_aio.so.6.0 00:04:49.106 SYMLINK libspdk_blobfs_bdev.so 00:04:49.106 SYMLINK libspdk_bdev_null.so 00:04:49.106 LIB libspdk_bdev_error.a 00:04:49.106 SO libspdk_bdev_split.so.6.0 00:04:49.106 LIB libspdk_bdev_gpt.a 00:04:49.106 SO libspdk_bdev_error.so.6.0 00:04:49.106 SO libspdk_bdev_gpt.so.6.0 00:04:49.106 SYMLINK libspdk_bdev_passthru.so 00:04:49.106 SYMLINK libspdk_bdev_ftl.so 00:04:49.106 SYMLINK libspdk_bdev_aio.so 00:04:49.106 SYMLINK libspdk_bdev_split.so 00:04:49.106 LIB libspdk_bdev_zone_block.a 00:04:49.106 LIB libspdk_bdev_delay.a 00:04:49.106 SYMLINK libspdk_bdev_error.so 00:04:49.106 SYMLINK libspdk_bdev_gpt.so 00:04:49.106 LIB libspdk_bdev_malloc.a 00:04:49.106 SO libspdk_bdev_zone_block.so.6.0 00:04:49.106 SO libspdk_bdev_delay.so.6.0 00:04:49.106 LIB libspdk_bdev_lvol.a 00:04:49.106 LIB libspdk_bdev_iscsi.a 00:04:49.106 SO libspdk_bdev_malloc.so.6.0 00:04:49.106 SO libspdk_bdev_lvol.so.6.0 00:04:49.106 SO libspdk_bdev_iscsi.so.6.0 00:04:49.395 SYMLINK libspdk_bdev_zone_block.so 00:04:49.395 SYMLINK libspdk_bdev_delay.so 00:04:49.395 SYMLINK libspdk_bdev_iscsi.so 00:04:49.395 SYMLINK libspdk_bdev_malloc.so 00:04:49.395 SYMLINK libspdk_bdev_lvol.so 00:04:49.395 LIB libspdk_bdev_virtio.a 00:04:49.395 SO libspdk_bdev_virtio.so.6.0 00:04:49.395 SYMLINK libspdk_bdev_virtio.so 00:04:49.655 LIB libspdk_bdev_raid.a 00:04:49.655 SO libspdk_bdev_raid.so.6.0 00:04:49.916 SYMLINK libspdk_bdev_raid.so 00:04:50.852 LIB libspdk_bdev_nvme.a 00:04:50.852 SO libspdk_bdev_nvme.so.7.1 00:04:50.852 SYMLINK libspdk_bdev_nvme.so 00:04:51.419 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:51.419 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:51.419 CC module/event/subsystems/vmd/vmd.o 00:04:51.419 CC module/event/subsystems/sock/sock.o 00:04:51.419 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:51.419 CC module/event/subsystems/keyring/keyring.o 00:04:51.419 CC module/event/subsystems/fsdev/fsdev.o 00:04:51.419 CC module/event/subsystems/iobuf/iobuf.o 00:04:51.419 CC module/event/subsystems/scheduler/scheduler.o 00:04:51.419 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:51.419 LIB libspdk_event_scheduler.a 00:04:51.419 SO libspdk_event_scheduler.so.4.0 00:04:51.419 LIB libspdk_event_keyring.a 00:04:51.419 LIB libspdk_event_vhost_blk.a 
00:04:51.419 LIB libspdk_event_vmd.a 00:04:51.419 LIB libspdk_event_fsdev.a 00:04:51.419 LIB libspdk_event_vfu_tgt.a 00:04:51.419 LIB libspdk_event_sock.a 00:04:51.419 SO libspdk_event_keyring.so.1.0 00:04:51.419 SO libspdk_event_vhost_blk.so.3.0 00:04:51.419 SO libspdk_event_vmd.so.6.0 00:04:51.419 LIB libspdk_event_iobuf.a 00:04:51.419 SO libspdk_event_sock.so.5.0 00:04:51.419 SO libspdk_event_fsdev.so.1.0 00:04:51.419 SYMLINK libspdk_event_scheduler.so 00:04:51.419 SO libspdk_event_vfu_tgt.so.3.0 00:04:51.419 SO libspdk_event_iobuf.so.3.0 00:04:51.419 SYMLINK libspdk_event_vhost_blk.so 00:04:51.419 SYMLINK libspdk_event_keyring.so 00:04:51.419 SYMLINK libspdk_event_vmd.so 00:04:51.419 SYMLINK libspdk_event_fsdev.so 00:04:51.419 SYMLINK libspdk_event_vfu_tgt.so 00:04:51.419 SYMLINK libspdk_event_sock.so 00:04:51.419 SYMLINK libspdk_event_iobuf.so 00:04:51.678 CC module/event/subsystems/accel/accel.o 00:04:51.678 LIB libspdk_event_accel.a 00:04:51.678 SO libspdk_event_accel.so.6.0 00:04:51.937 SYMLINK libspdk_event_accel.so 00:04:51.937 CC module/event/subsystems/bdev/bdev.o 00:04:52.196 LIB libspdk_event_bdev.a 00:04:52.196 SO libspdk_event_bdev.so.6.0 00:04:52.196 SYMLINK libspdk_event_bdev.so 00:04:52.455 CC module/event/subsystems/scsi/scsi.o 00:04:52.455 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:52.455 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:52.455 CC module/event/subsystems/ublk/ublk.o 00:04:52.455 CC module/event/subsystems/nbd/nbd.o 00:04:52.455 LIB libspdk_event_nbd.a 00:04:52.455 SO libspdk_event_nbd.so.6.0 00:04:52.455 LIB libspdk_event_ublk.a 00:04:52.455 LIB libspdk_event_scsi.a 00:04:52.455 SO libspdk_event_ublk.so.3.0 00:04:52.455 SO libspdk_event_scsi.so.6.0 00:04:52.455 SYMLINK libspdk_event_nbd.so 00:04:52.455 LIB libspdk_event_nvmf.a 00:04:52.455 SO libspdk_event_nvmf.so.6.0 00:04:52.455 SYMLINK libspdk_event_scsi.so 00:04:52.455 SYMLINK libspdk_event_ublk.so 00:04:52.455 SYMLINK libspdk_event_nvmf.so 00:04:52.714 CC module/event/subsystems/iscsi/iscsi.o 00:04:52.714 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:52.714 LIB libspdk_event_vhost_scsi.a 00:04:52.714 LIB libspdk_event_iscsi.a 00:04:52.714 SO libspdk_event_vhost_scsi.so.3.0 00:04:52.972 SO libspdk_event_iscsi.so.6.0 00:04:52.972 SYMLINK libspdk_event_vhost_scsi.so 00:04:52.972 SYMLINK libspdk_event_iscsi.so 00:04:52.972 SO libspdk.so.6.0 00:04:52.972 SYMLINK libspdk.so 00:04:53.233 CC app/spdk_top/spdk_top.o 00:04:53.233 CC app/trace_record/trace_record.o 00:04:53.233 CC app/spdk_nvme_discover/discovery_aer.o 00:04:53.233 CXX app/trace/trace.o 00:04:53.233 CC app/spdk_lspci/spdk_lspci.o 00:04:53.233 CC app/spdk_nvme_perf/perf.o 00:04:53.233 TEST_HEADER include/spdk/accel.h 00:04:53.233 CC app/spdk_nvme_identify/identify.o 00:04:53.233 TEST_HEADER include/spdk/accel_module.h 00:04:53.233 TEST_HEADER include/spdk/barrier.h 00:04:53.233 TEST_HEADER include/spdk/assert.h 00:04:53.233 TEST_HEADER include/spdk/base64.h 00:04:53.233 TEST_HEADER include/spdk/bdev.h 00:04:53.233 TEST_HEADER include/spdk/bit_array.h 00:04:53.233 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.233 TEST_HEADER include/spdk/bdev_module.h 00:04:53.233 CC test/rpc_client/rpc_client_test.o 00:04:53.233 TEST_HEADER include/spdk/bit_pool.h 00:04:53.233 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.233 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.233 TEST_HEADER include/spdk/blobfs.h 00:04:53.233 TEST_HEADER include/spdk/blob.h 00:04:53.233 TEST_HEADER include/spdk/conf.h 00:04:53.233 TEST_HEADER 
include/spdk/config.h 00:04:53.233 TEST_HEADER include/spdk/cpuset.h 00:04:53.233 TEST_HEADER include/spdk/crc16.h 00:04:53.233 TEST_HEADER include/spdk/crc32.h 00:04:53.233 TEST_HEADER include/spdk/dma.h 00:04:53.233 TEST_HEADER include/spdk/crc64.h 00:04:53.233 TEST_HEADER include/spdk/dif.h 00:04:53.233 TEST_HEADER include/spdk/endian.h 00:04:53.233 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.233 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.233 TEST_HEADER include/spdk/event.h 00:04:53.233 TEST_HEADER include/spdk/env.h 00:04:53.233 TEST_HEADER include/spdk/fd_group.h 00:04:53.233 TEST_HEADER include/spdk/fd.h 00:04:53.233 TEST_HEADER include/spdk/file.h 00:04:53.233 TEST_HEADER include/spdk/fsdev.h 00:04:53.233 TEST_HEADER include/spdk/ftl.h 00:04:53.233 TEST_HEADER include/spdk/fsdev_module.h 00:04:53.233 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:53.233 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.233 TEST_HEADER include/spdk/hexlify.h 00:04:53.233 TEST_HEADER include/spdk/histogram_data.h 00:04:53.233 TEST_HEADER include/spdk/idxd.h 00:04:53.233 TEST_HEADER include/spdk/init.h 00:04:53.233 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.233 TEST_HEADER include/spdk/ioat.h 00:04:53.233 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.233 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.233 TEST_HEADER include/spdk/json.h 00:04:53.233 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.233 TEST_HEADER include/spdk/keyring.h 00:04:53.233 TEST_HEADER include/spdk/keyring_module.h 00:04:53.233 TEST_HEADER include/spdk/log.h 00:04:53.233 TEST_HEADER include/spdk/lvol.h 00:04:53.233 TEST_HEADER include/spdk/md5.h 00:04:53.233 TEST_HEADER include/spdk/memory.h 00:04:53.233 CC app/nvmf_tgt/nvmf_main.o 00:04:53.233 TEST_HEADER include/spdk/likely.h 00:04:53.233 TEST_HEADER include/spdk/nbd.h 00:04:53.233 TEST_HEADER include/spdk/mmio.h 00:04:53.233 TEST_HEADER include/spdk/net.h 00:04:53.233 TEST_HEADER include/spdk/notify.h 00:04:53.233 TEST_HEADER include/spdk/nvme.h 00:04:53.233 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.233 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.233 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.233 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.233 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.233 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.233 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.233 TEST_HEADER include/spdk/nvmf.h 00:04:53.233 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.233 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.233 TEST_HEADER include/spdk/opal.h 00:04:53.233 TEST_HEADER include/spdk/opal_spec.h 00:04:53.233 TEST_HEADER include/spdk/pci_ids.h 00:04:53.233 CC app/spdk_dd/spdk_dd.o 00:04:53.233 TEST_HEADER include/spdk/pipe.h 00:04:53.233 TEST_HEADER include/spdk/queue.h 00:04:53.233 CC app/iscsi_tgt/iscsi_tgt.o 00:04:53.233 TEST_HEADER include/spdk/reduce.h 00:04:53.233 TEST_HEADER include/spdk/rpc.h 00:04:53.233 TEST_HEADER include/spdk/scheduler.h 00:04:53.233 TEST_HEADER include/spdk/scsi.h 00:04:53.233 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.233 TEST_HEADER include/spdk/sock.h 00:04:53.233 TEST_HEADER include/spdk/stdinc.h 00:04:53.233 TEST_HEADER include/spdk/string.h 00:04:53.233 TEST_HEADER include/spdk/thread.h 00:04:53.233 TEST_HEADER include/spdk/trace.h 00:04:53.233 TEST_HEADER include/spdk/trace_parser.h 00:04:53.233 TEST_HEADER include/spdk/tree.h 00:04:53.233 TEST_HEADER include/spdk/ublk.h 00:04:53.233 TEST_HEADER include/spdk/util.h 00:04:53.233 TEST_HEADER include/spdk/uuid.h 00:04:53.233 TEST_HEADER 
include/spdk/version.h 00:04:53.233 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.233 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.233 TEST_HEADER include/spdk/vhost.h 00:04:53.233 TEST_HEADER include/spdk/vmd.h 00:04:53.233 TEST_HEADER include/spdk/xor.h 00:04:53.233 TEST_HEADER include/spdk/zipf.h 00:04:53.233 CXX test/cpp_headers/accel.o 00:04:53.233 CXX test/cpp_headers/accel_module.o 00:04:53.233 CXX test/cpp_headers/assert.o 00:04:53.233 CXX test/cpp_headers/barrier.o 00:04:53.233 CXX test/cpp_headers/base64.o 00:04:53.233 CXX test/cpp_headers/bdev.o 00:04:53.233 CXX test/cpp_headers/bdev_module.o 00:04:53.233 CXX test/cpp_headers/bdev_zone.o 00:04:53.233 CXX test/cpp_headers/bit_array.o 00:04:53.233 CXX test/cpp_headers/bit_pool.o 00:04:53.233 CXX test/cpp_headers/blob_bdev.o 00:04:53.233 CXX test/cpp_headers/blobfs_bdev.o 00:04:53.233 CXX test/cpp_headers/blobfs.o 00:04:53.233 CC app/spdk_tgt/spdk_tgt.o 00:04:53.233 CXX test/cpp_headers/blob.o 00:04:53.233 CXX test/cpp_headers/conf.o 00:04:53.233 CXX test/cpp_headers/cpuset.o 00:04:53.233 CXX test/cpp_headers/config.o 00:04:53.233 CXX test/cpp_headers/crc32.o 00:04:53.233 CXX test/cpp_headers/crc64.o 00:04:53.233 CXX test/cpp_headers/crc16.o 00:04:53.233 CXX test/cpp_headers/dma.o 00:04:53.233 CXX test/cpp_headers/dif.o 00:04:53.233 CXX test/cpp_headers/endian.o 00:04:53.233 CXX test/cpp_headers/env_dpdk.o 00:04:53.233 CXX test/cpp_headers/env.o 00:04:53.233 CXX test/cpp_headers/fd_group.o 00:04:53.233 CXX test/cpp_headers/event.o 00:04:53.233 CXX test/cpp_headers/fd.o 00:04:53.233 CXX test/cpp_headers/file.o 00:04:53.233 CXX test/cpp_headers/fsdev.o 00:04:53.233 CXX test/cpp_headers/fsdev_module.o 00:04:53.233 CXX test/cpp_headers/ftl.o 00:04:53.233 CXX test/cpp_headers/fuse_dispatcher.o 00:04:53.233 CXX test/cpp_headers/gpt_spec.o 00:04:53.233 CXX test/cpp_headers/hexlify.o 00:04:53.233 CXX test/cpp_headers/histogram_data.o 00:04:53.233 CXX test/cpp_headers/idxd.o 00:04:53.233 CXX test/cpp_headers/idxd_spec.o 00:04:53.233 CXX test/cpp_headers/ioat.o 00:04:53.233 CXX test/cpp_headers/init.o 00:04:53.233 CXX test/cpp_headers/iscsi_spec.o 00:04:53.233 CXX test/cpp_headers/ioat_spec.o 00:04:53.233 CXX test/cpp_headers/jsonrpc.o 00:04:53.233 CXX test/cpp_headers/keyring.o 00:04:53.233 CXX test/cpp_headers/json.o 00:04:53.233 CXX test/cpp_headers/likely.o 00:04:53.233 CXX test/cpp_headers/keyring_module.o 00:04:53.233 CXX test/cpp_headers/lvol.o 00:04:53.233 CXX test/cpp_headers/md5.o 00:04:53.233 CXX test/cpp_headers/log.o 00:04:53.233 CXX test/cpp_headers/mmio.o 00:04:53.234 CXX test/cpp_headers/nbd.o 00:04:53.234 CXX test/cpp_headers/memory.o 00:04:53.234 CXX test/cpp_headers/nvme.o 00:04:53.234 CXX test/cpp_headers/net.o 00:04:53.234 CXX test/cpp_headers/notify.o 00:04:53.234 CXX test/cpp_headers/nvme_intel.o 00:04:53.234 CXX test/cpp_headers/nvme_ocssd.o 00:04:53.234 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:53.234 CXX test/cpp_headers/nvme_spec.o 00:04:53.234 CXX test/cpp_headers/nvmf_cmd.o 00:04:53.234 CXX test/cpp_headers/nvme_zns.o 00:04:53.234 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:53.234 CC examples/ioat/verify/verify.o 00:04:53.234 CXX test/cpp_headers/nvmf_spec.o 00:04:53.234 CXX test/cpp_headers/nvmf.o 00:04:53.234 CXX test/cpp_headers/nvmf_transport.o 00:04:53.234 CXX test/cpp_headers/opal.o 00:04:53.234 CXX test/cpp_headers/opal_spec.o 00:04:53.234 CXX test/cpp_headers/pipe.o 00:04:53.234 CXX test/cpp_headers/pci_ids.o 00:04:53.234 CXX test/cpp_headers/queue.o 00:04:53.234 CXX test/cpp_headers/reduce.o 
00:04:53.234 CXX test/cpp_headers/scheduler.o 00:04:53.234 CXX test/cpp_headers/stdinc.o 00:04:53.234 CXX test/cpp_headers/rpc.o 00:04:53.234 CXX test/cpp_headers/scsi.o 00:04:53.234 CXX test/cpp_headers/scsi_spec.o 00:04:53.234 CXX test/cpp_headers/sock.o 00:04:53.234 CC examples/ioat/perf/perf.o 00:04:53.234 CXX test/cpp_headers/trace.o 00:04:53.234 CXX test/cpp_headers/string.o 00:04:53.234 CXX test/cpp_headers/thread.o 00:04:53.234 CC examples/util/zipf/zipf.o 00:04:53.234 CXX test/cpp_headers/trace_parser.o 00:04:53.234 CXX test/cpp_headers/uuid.o 00:04:53.234 CXX test/cpp_headers/tree.o 00:04:53.234 CXX test/cpp_headers/ublk.o 00:04:53.234 CXX test/cpp_headers/util.o 00:04:53.234 CXX test/cpp_headers/version.o 00:04:53.234 CXX test/cpp_headers/vfio_user_pci.o 00:04:53.234 CXX test/cpp_headers/vfio_user_spec.o 00:04:53.234 CXX test/cpp_headers/vmd.o 00:04:53.234 CXX test/cpp_headers/vhost.o 00:04:53.234 CXX test/cpp_headers/xor.o 00:04:53.234 CC app/fio/nvme/fio_plugin.o 00:04:53.234 CXX test/cpp_headers/zipf.o 00:04:53.234 CC test/thread/poller_perf/poller_perf.o 00:04:53.234 CC test/app/stub/stub.o 00:04:53.234 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:53.234 CC test/env/vtophys/vtophys.o 00:04:53.234 CC test/app/histogram_perf/histogram_perf.o 00:04:53.234 CC test/app/jsoncat/jsoncat.o 00:04:53.234 CC test/env/pci/pci_ut.o 00:04:53.234 CC test/env/memory/memory_ut.o 00:04:53.234 CC app/fio/bdev/fio_plugin.o 00:04:53.519 CC test/dma/test_dma/test_dma.o 00:04:53.519 CC test/app/bdev_svc/bdev_svc.o 00:04:53.519 LINK spdk_lspci 00:04:53.519 LINK rpc_client_test 00:04:53.519 LINK interrupt_tgt 00:04:53.519 CC test/env/mem_callbacks/mem_callbacks.o 00:04:53.778 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:53.778 LINK spdk_nvme_discover 00:04:53.778 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:53.778 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:53.778 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:53.778 LINK iscsi_tgt 00:04:53.778 LINK spdk_tgt 00:04:53.778 LINK env_dpdk_post_init 00:04:53.778 LINK jsoncat 00:04:53.778 LINK nvmf_tgt 00:04:53.778 LINK spdk_trace_record 00:04:53.778 LINK zipf 00:04:54.038 LINK vtophys 00:04:54.038 LINK poller_perf 00:04:54.038 LINK bdev_svc 00:04:54.038 LINK stub 00:04:54.038 LINK histogram_perf 00:04:54.038 LINK verify 00:04:54.038 LINK mem_callbacks 00:04:54.038 LINK ioat_perf 00:04:54.038 LINK spdk_dd 00:04:54.038 LINK pci_ut 00:04:54.038 LINK spdk_trace 00:04:54.038 LINK nvme_fuzz 00:04:54.300 LINK vhost_fuzz 00:04:54.300 CC examples/idxd/perf/perf.o 00:04:54.300 CC examples/vmd/lsvmd/lsvmd.o 00:04:54.300 CC examples/vmd/led/led.o 00:04:54.300 LINK spdk_nvme 00:04:54.300 CC examples/sock/hello_world/hello_sock.o 00:04:54.300 CC examples/thread/thread/thread_ex.o 00:04:54.300 CC test/event/reactor_perf/reactor_perf.o 00:04:54.300 CC test/event/reactor/reactor.o 00:04:54.300 CC test/event/event_perf/event_perf.o 00:04:54.300 LINK spdk_bdev 00:04:54.300 CC test/event/app_repeat/app_repeat.o 00:04:54.300 CC test/event/scheduler/scheduler.o 00:04:54.300 LINK test_dma 00:04:54.300 LINK lsvmd 00:04:54.300 LINK led 00:04:54.300 LINK spdk_nvme_perf 00:04:54.300 LINK reactor_perf 00:04:54.300 LINK reactor 00:04:54.300 LINK event_perf 00:04:54.300 CC app/vhost/vhost.o 00:04:54.300 LINK spdk_nvme_identify 00:04:54.300 LINK memory_ut 00:04:54.300 LINK app_repeat 00:04:54.300 LINK spdk_top 00:04:54.300 LINK thread 00:04:54.300 LINK idxd_perf 00:04:54.300 LINK scheduler 00:04:54.300 LINK hello_sock 00:04:54.560 LINK vhost 00:04:54.560 CC 
test/nvme/aer/aer.o 00:04:54.560 CC test/nvme/reserve/reserve.o 00:04:54.560 CC test/nvme/sgl/sgl.o 00:04:54.560 CC test/nvme/reset/reset.o 00:04:54.560 CC test/nvme/e2edp/nvme_dp.o 00:04:54.560 CC test/nvme/startup/startup.o 00:04:54.560 CC test/nvme/boot_partition/boot_partition.o 00:04:54.560 CC test/nvme/cuse/cuse.o 00:04:54.560 CC test/nvme/fused_ordering/fused_ordering.o 00:04:54.560 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:54.560 CC test/nvme/simple_copy/simple_copy.o 00:04:54.560 CC test/nvme/compliance/nvme_compliance.o 00:04:54.560 CC test/nvme/connect_stress/connect_stress.o 00:04:54.560 CC test/nvme/overhead/overhead.o 00:04:54.560 CC test/nvme/fdp/fdp.o 00:04:54.560 CC test/nvme/err_injection/err_injection.o 00:04:54.560 CC test/accel/dif/dif.o 00:04:54.560 CC test/blobfs/mkfs/mkfs.o 00:04:54.560 CC examples/accel/perf/accel_perf.o 00:04:54.560 CC examples/blob/cli/blobcli.o 00:04:54.819 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:54.819 CC examples/blob/hello_world/hello_blob.o 00:04:54.819 CC examples/nvme/abort/abort.o 00:04:54.819 CC examples/nvme/hello_world/hello_world.o 00:04:54.819 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:54.819 CC examples/nvme/arbitration/arbitration.o 00:04:54.819 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:54.819 CC test/lvol/esnap/esnap.o 00:04:54.819 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:54.819 CC examples/nvme/reconnect/reconnect.o 00:04:54.819 CC examples/nvme/hotplug/hotplug.o 00:04:54.819 LINK boot_partition 00:04:54.819 LINK reserve 00:04:54.819 LINK connect_stress 00:04:54.819 LINK err_injection 00:04:54.819 LINK doorbell_aers 00:04:54.819 LINK fused_ordering 00:04:54.819 LINK startup 00:04:54.819 LINK sgl 00:04:54.819 LINK mkfs 00:04:54.819 LINK aer 00:04:54.819 LINK nvme_dp 00:04:54.819 LINK overhead 00:04:54.819 LINK nvme_compliance 00:04:54.819 LINK cmb_copy 00:04:54.819 LINK simple_copy 00:04:54.819 LINK fdp 00:04:54.819 LINK reset 00:04:54.819 LINK hello_world 00:04:54.819 LINK hello_blob 00:04:54.819 LINK hotplug 00:04:54.819 LINK hello_fsdev 00:04:54.819 LINK pmr_persistence 00:04:55.079 LINK accel_perf 00:04:55.079 LINK blobcli 00:04:55.079 LINK arbitration 00:04:55.079 LINK reconnect 00:04:55.079 LINK nvme_manage 00:04:55.079 LINK abort 00:04:55.079 LINK dif 00:04:55.339 LINK iscsi_fuzz 00:04:55.339 CC examples/bdev/bdevperf/bdevperf.o 00:04:55.339 CC examples/bdev/hello_world/hello_bdev.o 00:04:55.339 CC test/bdev/bdevio/bdevio.o 00:04:55.339 LINK cuse 00:04:55.598 LINK hello_bdev 00:04:55.598 LINK bdevio 00:04:55.857 LINK bdevperf 00:04:56.116 CC examples/nvmf/nvmf/nvmf.o 00:04:56.375 LINK nvmf 00:04:57.751 LINK esnap 00:04:57.751 00:04:57.751 real 0m43.312s 00:04:57.751 user 5m17.898s 00:04:57.751 sys 2m39.043s 00:04:57.752 16:31:46 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:57.752 16:31:46 make -- common/autotest_common.sh@10 -- $ set +x 00:04:57.752 ************************************ 00:04:57.752 END TEST make 00:04:57.752 ************************************ 00:04:57.752 16:31:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:57.752 16:31:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:57.752 16:31:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:58.011 16:31:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.011 16:31:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:58.011 16:31:46 -- pm/common@44 -- $ pid=1865067 
00:04:58.011 16:31:46 -- pm/common@50 -- $ kill -TERM 1865067 00:04:58.011 16:31:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.011 16:31:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:58.011 16:31:46 -- pm/common@44 -- $ pid=1865068 00:04:58.011 16:31:46 -- pm/common@50 -- $ kill -TERM 1865068 00:04:58.011 16:31:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.011 16:31:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:58.011 16:31:46 -- pm/common@44 -- $ pid=1865070 00:04:58.011 16:31:46 -- pm/common@50 -- $ kill -TERM 1865070 00:04:58.011 16:31:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.011 16:31:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:58.011 16:31:46 -- pm/common@44 -- $ pid=1865094 00:04:58.011 16:31:46 -- pm/common@50 -- $ sudo -E kill -TERM 1865094 00:04:58.011 16:31:46 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:58.011 16:31:46 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:58.011 16:31:46 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.011 16:31:46 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.011 16:31:46 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.011 16:31:46 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.011 16:31:46 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.011 16:31:46 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.011 16:31:46 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.011 16:31:46 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.011 16:31:46 -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.011 16:31:46 -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.011 16:31:46 -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.011 16:31:46 -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.011 16:31:46 -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.011 16:31:46 -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.011 16:31:46 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.011 16:31:46 -- scripts/common.sh@344 -- # case "$op" in 00:04:58.011 16:31:46 -- scripts/common.sh@345 -- # : 1 00:04:58.011 16:31:46 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.011 16:31:46 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.011 16:31:46 -- scripts/common.sh@365 -- # decimal 1 00:04:58.011 16:31:46 -- scripts/common.sh@353 -- # local d=1 00:04:58.011 16:31:46 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.011 16:31:46 -- scripts/common.sh@355 -- # echo 1 00:04:58.011 16:31:46 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.011 16:31:46 -- scripts/common.sh@366 -- # decimal 2 00:04:58.011 16:31:46 -- scripts/common.sh@353 -- # local d=2 00:04:58.011 16:31:46 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.011 16:31:46 -- scripts/common.sh@355 -- # echo 2 00:04:58.011 16:31:46 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.011 16:31:46 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.011 16:31:46 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.011 16:31:46 -- scripts/common.sh@368 -- # return 0 00:04:58.011 16:31:46 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.011 16:31:46 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.011 --rc genhtml_branch_coverage=1 00:04:58.011 --rc genhtml_function_coverage=1 00:04:58.011 --rc genhtml_legend=1 00:04:58.011 --rc geninfo_all_blocks=1 00:04:58.011 --rc geninfo_unexecuted_blocks=1 00:04:58.011 00:04:58.011 ' 00:04:58.011 16:31:46 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.011 --rc genhtml_branch_coverage=1 00:04:58.011 --rc genhtml_function_coverage=1 00:04:58.011 --rc genhtml_legend=1 00:04:58.011 --rc geninfo_all_blocks=1 00:04:58.011 --rc geninfo_unexecuted_blocks=1 00:04:58.011 00:04:58.011 ' 00:04:58.011 16:31:46 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.012 --rc genhtml_branch_coverage=1 00:04:58.012 --rc genhtml_function_coverage=1 00:04:58.012 --rc genhtml_legend=1 00:04:58.012 --rc geninfo_all_blocks=1 00:04:58.012 --rc geninfo_unexecuted_blocks=1 00:04:58.012 00:04:58.012 ' 00:04:58.012 16:31:46 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.012 --rc genhtml_branch_coverage=1 00:04:58.012 --rc genhtml_function_coverage=1 00:04:58.012 --rc genhtml_legend=1 00:04:58.012 --rc geninfo_all_blocks=1 00:04:58.012 --rc geninfo_unexecuted_blocks=1 00:04:58.012 00:04:58.012 ' 00:04:58.012 16:31:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:58.012 16:31:46 -- nvmf/common.sh@7 -- # uname -s 00:04:58.012 16:31:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.012 16:31:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.012 16:31:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.012 16:31:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.012 16:31:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.012 16:31:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.012 16:31:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.012 16:31:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.012 16:31:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.012 16:31:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.012 16:31:46 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:58.012 16:31:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:58.012 16:31:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.012 16:31:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.012 16:31:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:58.012 16:31:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.012 16:31:46 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:58.012 16:31:46 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.012 16:31:46 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.012 16:31:46 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.012 16:31:46 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.012 16:31:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.012 16:31:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.012 16:31:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.012 16:31:46 -- paths/export.sh@5 -- # export PATH 00:04:58.012 16:31:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.012 16:31:46 -- nvmf/common.sh@51 -- # : 0 00:04:58.012 16:31:46 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.012 16:31:46 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.012 16:31:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.012 16:31:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.012 16:31:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.012 16:31:46 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.012 16:31:46 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.012 16:31:46 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.012 16:31:46 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.012 16:31:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:58.012 16:31:46 -- spdk/autotest.sh@32 -- # uname -s 00:04:58.012 16:31:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:58.012 16:31:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:58.012 16:31:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:04:58.012 16:31:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:58.012 16:31:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:58.012 16:31:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:58.012 16:31:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:58.012 16:31:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:58.012 16:31:46 -- spdk/autotest.sh@48 -- # udevadm_pid=1942966 00:04:58.012 16:31:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:58.012 16:31:46 -- pm/common@17 -- # local monitor 00:04:58.012 16:31:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.012 16:31:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.012 16:31:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:58.012 16:31:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.012 16:31:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.012 16:31:46 -- pm/common@25 -- # sleep 1 00:04:58.012 16:31:46 -- pm/common@21 -- # date +%s 00:04:58.012 16:31:46 -- pm/common@21 -- # date +%s 00:04:58.012 16:31:46 -- pm/common@21 -- # date +%s 00:04:58.012 16:31:46 -- pm/common@21 -- # date +%s 00:04:58.012 16:31:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733499106 00:04:58.012 16:31:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733499106 00:04:58.012 16:31:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733499106 00:04:58.012 16:31:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733499106 00:04:58.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733499106_collect-cpu-load.pm.log 00:04:58.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733499106_collect-cpu-temp.pm.log 00:04:58.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733499106_collect-vmstat.pm.log 00:04:58.012 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733499106_collect-bmc-pm.bmc.pm.log 00:04:58.949 16:31:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:58.949 16:31:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:58.949 16:31:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.949 16:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:58.949 16:31:47 -- spdk/autotest.sh@59 -- # create_test_list 00:04:58.949 16:31:47 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:58.949 16:31:47 -- common/autotest_common.sh@10 -- # set +x 00:04:58.949 16:31:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:58.950 16:31:47 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.950 16:31:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.950 16:31:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:58.950 16:31:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:58.950 16:31:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:58.950 16:31:47 -- common/autotest_common.sh@1457 -- # uname 00:04:58.950 16:31:47 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:58.950 16:31:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:58.950 16:31:47 -- common/autotest_common.sh@1477 -- # uname 00:04:58.950 16:31:47 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:58.950 16:31:47 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:58.950 16:31:47 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:59.208 lcov: LCOV version 1.15 00:04:59.208 16:31:47 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:11.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:11.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:19.657 16:32:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:19.657 16:32:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.657 16:32:07 -- common/autotest_common.sh@10 -- # set +x 00:05:19.657 16:32:07 -- spdk/autotest.sh@78 -- # rm -f 00:05:19.657 16:32:07 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.036 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:05:21.036 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:05:21.295 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:05:21.295 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:05:21.295 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:05:21.295 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:05:21.295 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:05:21.295 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:05:21.295 0000:65:00.0 (144d a80a): Already using the nvme driver 00:05:21.295 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:05:21.296 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:05:21.296 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:05:21.296 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:05:21.296 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:05:21.296 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:05:21.296 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:05:21.296 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:05:21.556 16:32:10 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:05:21.556 16:32:10 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:21.556 16:32:10 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:21.556 16:32:10 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:21.556 16:32:10 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:21.556 16:32:10 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:21.556 16:32:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:21.556 16:32:10 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:05:21.556 16:32:10 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:21.556 16:32:10 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:21.556 16:32:10 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:21.556 16:32:10 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.556 16:32:10 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:21.556 16:32:10 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:21.556 16:32:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:21.556 16:32:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:21.556 16:32:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:21.556 16:32:10 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:21.556 16:32:10 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:21.815 No valid GPT data, bailing 00:05:21.815 16:32:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:21.815 16:32:10 -- scripts/common.sh@394 -- # pt= 00:05:21.815 16:32:10 -- scripts/common.sh@395 -- # return 1 00:05:21.815 16:32:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:21.815 1+0 records in 00:05:21.815 1+0 records out 00:05:21.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00199164 s, 526 MB/s 00:05:21.815 16:32:10 -- spdk/autotest.sh@105 -- # sync 00:05:21.815 16:32:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:21.815 16:32:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:21.815 16:32:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:27.125 16:32:15 -- spdk/autotest.sh@111 -- # uname -s 00:05:27.125 16:32:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:27.125 16:32:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:27.125 16:32:15 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:29.028 Hugepages 00:05:29.028 node hugesize free / total 00:05:29.028 node0 1048576kB 0 / 0 00:05:29.028 node0 2048kB 0 / 0 00:05:29.028 node1 1048576kB 0 / 0 00:05:29.028 node1 2048kB 0 / 0 00:05:29.028 00:05:29.028 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:29.029 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:29.029 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:29.029 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:29.029 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:29.029 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:29.029 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:29.029 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:29.029 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:29.288 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:29.288 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:29.288 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:05:29.288 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:29.288 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:29.288 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:29.288 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:29.288 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:29.288 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:29.288 16:32:17 -- spdk/autotest.sh@117 -- # uname -s 00:05:29.288 16:32:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:29.288 16:32:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:29.288 16:32:17 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:31.828 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:31.828 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:33.738 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:33.997 16:32:22 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:34.936 16:32:23 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:34.936 16:32:23 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:34.936 16:32:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:34.936 16:32:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:34.936 16:32:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:34.936 16:32:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:34.936 16:32:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:34.936 16:32:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:34.936 16:32:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:34.936 16:32:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:34.936 16:32:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:34.936 16:32:23 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:37.469 Waiting for block devices as requested 00:05:37.469 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:37.469 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:37.727 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:37.727 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:37.727 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:37.727 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:38.039 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:38.039 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:38.039 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:38.298 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:38.298 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
00:05:38.298 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:38.298 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:38.556 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:38.556 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:38.556 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:38.556 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:39.122 16:32:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:39.122 16:32:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:05:39.122 16:32:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:39.122 16:32:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:39.122 16:32:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:39.122 16:32:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:39.122 16:32:27 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:05:39.122 16:32:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:39.122 16:32:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:39.122 16:32:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:39.122 16:32:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:39.122 16:32:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:39.122 16:32:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:39.122 16:32:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:39.122 16:32:27 -- common/autotest_common.sh@1543 -- # continue 00:05:39.122 16:32:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:39.122 16:32:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.122 16:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:39.122 16:32:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:39.122 16:32:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.122 16:32:27 -- common/autotest_common.sh@10 -- # set +x 00:05:39.122 16:32:27 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:41.652 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:41.652 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:41.911 16:32:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:41.911 16:32:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.911 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:05:41.911 16:32:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:41.911 16:32:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:41.911 16:32:30 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:41.911 16:32:30 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:41.911 16:32:30 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:41.911 16:32:30 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:41.911 16:32:30 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:41.911 16:32:30 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:41.911 16:32:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:41.911 16:32:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:41.911 16:32:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.911 16:32:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:41.911 16:32:30 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:42.170 16:32:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:42.170 16:32:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:05:42.170 16:32:30 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.170 16:32:30 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:42.170 16:32:30 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:05:42.170 16:32:30 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:42.170 16:32:30 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:42.170 16:32:30 -- common/autotest_common.sh@1572 -- # return 0 00:05:42.170 16:32:30 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:42.170 16:32:30 -- common/autotest_common.sh@1580 -- # return 0 00:05:42.170 16:32:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:42.170 16:32:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:42.170 16:32:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.170 16:32:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.170 16:32:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:42.170 16:32:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.170 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 16:32:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:42.170 16:32:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:42.170 16:32:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.170 16:32:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.170 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:05:42.170 ************************************ 00:05:42.170 START TEST env 00:05:42.170 ************************************ 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
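Stepping back to the opal_revert_cleanup trace above: the selection it performs is a two-step filter that reads as a few lines of shell. A sketch under the paths from this run (not the helper's exact code):

```bash
# Enumerate controller BDFs from gen_nvme.sh's JSON config, then keep
# only controllers whose PCI device ID matches the Opal-capable target.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
target=0x0a54
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
for bdf in "${bdfs[@]}"; do
    device=$(< "/sys/bus/pci/devices/$bdf/device")
    [[ $device == "$target" ]] && echo "$bdf selected for Opal revert"
done
```

The lone controller here (144d:a80a) reports 0xa80a rather than 0x0a54, so the list stays empty, the cleanup is a no-op, and the env suite starts below.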
00:05:42.170 * Looking for test storage... 00:05:42.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.170 16:32:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.170 16:32:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.170 16:32:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.170 16:32:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.170 16:32:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.170 16:32:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.170 16:32:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.170 16:32:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.170 16:32:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.170 16:32:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.170 16:32:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.170 16:32:30 env -- scripts/common.sh@344 -- # case "$op" in 00:05:42.170 16:32:30 env -- scripts/common.sh@345 -- # : 1 00:05:42.170 16:32:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.170 16:32:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.170 16:32:30 env -- scripts/common.sh@365 -- # decimal 1 00:05:42.170 16:32:30 env -- scripts/common.sh@353 -- # local d=1 00:05:42.170 16:32:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.170 16:32:30 env -- scripts/common.sh@355 -- # echo 1 00:05:42.170 16:32:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.170 16:32:30 env -- scripts/common.sh@366 -- # decimal 2 00:05:42.170 16:32:30 env -- scripts/common.sh@353 -- # local d=2 00:05:42.170 16:32:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.170 16:32:30 env -- scripts/common.sh@355 -- # echo 2 00:05:42.170 16:32:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.170 16:32:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.170 16:32:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.170 16:32:30 env -- scripts/common.sh@368 -- # return 0 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.170 --rc genhtml_branch_coverage=1 00:05:42.170 --rc genhtml_function_coverage=1 00:05:42.170 --rc genhtml_legend=1 00:05:42.170 --rc geninfo_all_blocks=1 00:05:42.170 --rc geninfo_unexecuted_blocks=1 00:05:42.170 00:05:42.170 ' 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.170 --rc genhtml_branch_coverage=1 00:05:42.170 --rc genhtml_function_coverage=1 00:05:42.170 --rc genhtml_legend=1 00:05:42.170 --rc geninfo_all_blocks=1 00:05:42.170 --rc geninfo_unexecuted_blocks=1 00:05:42.170 00:05:42.170 ' 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.170 --rc genhtml_branch_coverage=1 00:05:42.170 
--rc genhtml_function_coverage=1 00:05:42.170 --rc genhtml_legend=1 00:05:42.170 --rc geninfo_all_blocks=1 00:05:42.170 --rc geninfo_unexecuted_blocks=1 00:05:42.170 00:05:42.170 ' 00:05:42.170 16:32:30 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.170 --rc genhtml_branch_coverage=1 00:05:42.170 --rc genhtml_function_coverage=1 00:05:42.171 --rc genhtml_legend=1 00:05:42.171 --rc geninfo_all_blocks=1 00:05:42.171 --rc geninfo_unexecuted_blocks=1 00:05:42.171 00:05:42.171 ' 00:05:42.171 16:32:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:42.171 16:32:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.171 16:32:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.171 16:32:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 ************************************ 00:05:42.171 START TEST env_memory 00:05:42.171 ************************************ 00:05:42.171 16:32:30 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:42.430 00:05:42.430 00:05:42.430 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.430 http://cunit.sourceforge.net/ 00:05:42.430 00:05:42.430 00:05:42.430 Suite: memory 00:05:42.430 Test: alloc and free memory map ...[2024-12-06 16:32:30.894157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:42.430 passed 00:05:42.430 Test: mem map translation ...[2024-12-06 16:32:30.919749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:42.430 [2024-12-06 16:32:30.919799] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:42.430 [2024-12-06 16:32:30.919845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:42.430 [2024-12-06 16:32:30.919853] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:42.430 passed 00:05:42.430 Test: mem map registration ...[2024-12-06 16:32:30.975205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:42.430 [2024-12-06 16:32:30.975228] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:42.430 passed 00:05:42.430 Test: mem map adjacent registrations ...passed 00:05:42.430 00:05:42.430 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.430 suites 1 1 n/a 0 0 00:05:42.430 tests 4 4 4 0 0 00:05:42.430 asserts 152 152 152 0 n/a 00:05:42.430 00:05:42.430 Elapsed time = 0.182 seconds 00:05:42.430 00:05:42.430 real 0m0.191s 00:05:42.430 user 0m0.180s 00:05:42.430 sys 0m0.010s 00:05:42.430 16:32:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.430 16:32:31 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.430 ************************************ 00:05:42.430 END TEST env_memory 00:05:42.430 ************************************ 00:05:42.430 16:32:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.430 16:32:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.430 16:32:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.430 16:32:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.430 ************************************ 00:05:42.430 START TEST env_vtophys 00:05:42.430 ************************************ 00:05:42.430 16:32:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:42.430 EAL: lib.eal log level changed from notice to debug 00:05:42.430 EAL: Detected lcore 0 as core 0 on socket 0 00:05:42.430 EAL: Detected lcore 1 as core 1 on socket 0 00:05:42.430 EAL: Detected lcore 2 as core 2 on socket 0 00:05:42.430 EAL: Detected lcore 3 as core 3 on socket 0 00:05:42.430 EAL: Detected lcore 4 as core 4 on socket 0 00:05:42.430 EAL: Detected lcore 5 as core 5 on socket 0 00:05:42.430 EAL: Detected lcore 6 as core 6 on socket 0 00:05:42.430 EAL: Detected lcore 7 as core 7 on socket 0 00:05:42.430 EAL: Detected lcore 8 as core 8 on socket 0 00:05:42.430 EAL: Detected lcore 9 as core 9 on socket 0 00:05:42.430 EAL: Detected lcore 10 as core 10 on socket 0 00:05:42.430 EAL: Detected lcore 11 as core 11 on socket 0 00:05:42.430 EAL: Detected lcore 12 as core 12 on socket 0 00:05:42.430 EAL: Detected lcore 13 as core 13 on socket 0 00:05:42.431 EAL: Detected lcore 14 as core 14 on socket 0 00:05:42.431 EAL: Detected lcore 15 as core 15 on socket 0 00:05:42.431 EAL: Detected lcore 16 as core 16 on socket 0 00:05:42.431 EAL: Detected lcore 17 as core 17 on socket 0 00:05:42.431 EAL: Detected lcore 18 as core 18 on socket 0 00:05:42.431 EAL: Detected lcore 19 as core 19 on socket 0 00:05:42.431 EAL: Detected lcore 20 as core 20 on socket 0 00:05:42.431 EAL: Detected lcore 21 as core 21 on socket 0 00:05:42.431 EAL: Detected lcore 22 as core 22 on socket 0 00:05:42.431 EAL: Detected lcore 23 as core 23 on socket 0 00:05:42.431 EAL: Detected lcore 24 as core 24 on socket 0 00:05:42.431 EAL: Detected lcore 25 as core 25 on socket 0 00:05:42.431 EAL: Detected lcore 26 as core 26 on socket 0 00:05:42.431 EAL: Detected lcore 27 as core 27 on socket 0 00:05:42.431 EAL: Detected lcore 28 as core 28 on socket 0 00:05:42.431 EAL: Detected lcore 29 as core 29 on socket 0 00:05:42.431 EAL: Detected lcore 30 as core 30 on socket 0 00:05:42.431 EAL: Detected lcore 31 as core 31 on socket 0 00:05:42.431 EAL: Detected lcore 32 as core 32 on socket 0 00:05:42.431 EAL: Detected lcore 33 as core 33 on socket 0 00:05:42.431 EAL: Detected lcore 34 as core 34 on socket 0 00:05:42.431 EAL: Detected lcore 35 as core 35 on socket 0 00:05:42.431 EAL: Detected lcore 36 as core 0 on socket 1 00:05:42.431 EAL: Detected lcore 37 as core 1 on socket 1 00:05:42.431 EAL: Detected lcore 38 as core 2 on socket 1 00:05:42.431 EAL: Detected lcore 39 as core 3 on socket 1 00:05:42.431 EAL: Detected lcore 40 as core 4 on socket 1 00:05:42.431 EAL: Detected lcore 41 as core 5 on socket 1 00:05:42.431 EAL: Detected lcore 42 as core 6 on socket 1 00:05:42.431 EAL: Detected lcore 43 as core 7 on socket 1 00:05:42.431 EAL: Detected lcore 44 as core 8 on socket 1 00:05:42.431 EAL: Detected 
lcore 45 as core 9 on socket 1 00:05:42.431 EAL: Detected lcore 46 as core 10 on socket 1 00:05:42.431 EAL: Detected lcore 47 as core 11 on socket 1 00:05:42.431 EAL: Detected lcore 48 as core 12 on socket 1 00:05:42.431 EAL: Detected lcore 49 as core 13 on socket 1 00:05:42.431 EAL: Detected lcore 50 as core 14 on socket 1 00:05:42.431 EAL: Detected lcore 51 as core 15 on socket 1 00:05:42.431 EAL: Detected lcore 52 as core 16 on socket 1 00:05:42.431 EAL: Detected lcore 53 as core 17 on socket 1 00:05:42.431 EAL: Detected lcore 54 as core 18 on socket 1 00:05:42.431 EAL: Detected lcore 55 as core 19 on socket 1 00:05:42.431 EAL: Detected lcore 56 as core 20 on socket 1 00:05:42.431 EAL: Detected lcore 57 as core 21 on socket 1 00:05:42.431 EAL: Detected lcore 58 as core 22 on socket 1 00:05:42.431 EAL: Detected lcore 59 as core 23 on socket 1 00:05:42.431 EAL: Detected lcore 60 as core 24 on socket 1 00:05:42.431 EAL: Detected lcore 61 as core 25 on socket 1 00:05:42.431 EAL: Detected lcore 62 as core 26 on socket 1 00:05:42.431 EAL: Detected lcore 63 as core 27 on socket 1 00:05:42.431 EAL: Detected lcore 64 as core 28 on socket 1 00:05:42.431 EAL: Detected lcore 65 as core 29 on socket 1 00:05:42.431 EAL: Detected lcore 66 as core 30 on socket 1 00:05:42.431 EAL: Detected lcore 67 as core 31 on socket 1 00:05:42.431 EAL: Detected lcore 68 as core 32 on socket 1 00:05:42.431 EAL: Detected lcore 69 as core 33 on socket 1 00:05:42.431 EAL: Detected lcore 70 as core 34 on socket 1 00:05:42.431 EAL: Detected lcore 71 as core 35 on socket 1 00:05:42.431 EAL: Detected lcore 72 as core 0 on socket 0 00:05:42.431 EAL: Detected lcore 73 as core 1 on socket 0 00:05:42.431 EAL: Detected lcore 74 as core 2 on socket 0 00:05:42.431 EAL: Detected lcore 75 as core 3 on socket 0 00:05:42.431 EAL: Detected lcore 76 as core 4 on socket 0 00:05:42.431 EAL: Detected lcore 77 as core 5 on socket 0 00:05:42.431 EAL: Detected lcore 78 as core 6 on socket 0 00:05:42.431 EAL: Detected lcore 79 as core 7 on socket 0 00:05:42.431 EAL: Detected lcore 80 as core 8 on socket 0 00:05:42.431 EAL: Detected lcore 81 as core 9 on socket 0 00:05:42.431 EAL: Detected lcore 82 as core 10 on socket 0 00:05:42.431 EAL: Detected lcore 83 as core 11 on socket 0 00:05:42.431 EAL: Detected lcore 84 as core 12 on socket 0 00:05:42.431 EAL: Detected lcore 85 as core 13 on socket 0 00:05:42.431 EAL: Detected lcore 86 as core 14 on socket 0 00:05:42.431 EAL: Detected lcore 87 as core 15 on socket 0 00:05:42.431 EAL: Detected lcore 88 as core 16 on socket 0 00:05:42.431 EAL: Detected lcore 89 as core 17 on socket 0 00:05:42.431 EAL: Detected lcore 90 as core 18 on socket 0 00:05:42.431 EAL: Detected lcore 91 as core 19 on socket 0 00:05:42.431 EAL: Detected lcore 92 as core 20 on socket 0 00:05:42.431 EAL: Detected lcore 93 as core 21 on socket 0 00:05:42.431 EAL: Detected lcore 94 as core 22 on socket 0 00:05:42.431 EAL: Detected lcore 95 as core 23 on socket 0 00:05:42.431 EAL: Detected lcore 96 as core 24 on socket 0 00:05:42.431 EAL: Detected lcore 97 as core 25 on socket 0 00:05:42.431 EAL: Detected lcore 98 as core 26 on socket 0 00:05:42.431 EAL: Detected lcore 99 as core 27 on socket 0 00:05:42.431 EAL: Detected lcore 100 as core 28 on socket 0 00:05:42.431 EAL: Detected lcore 101 as core 29 on socket 0 00:05:42.431 EAL: Detected lcore 102 as core 30 on socket 0 00:05:42.431 EAL: Detected lcore 103 as core 31 on socket 0 00:05:42.431 EAL: Detected lcore 104 as core 32 on socket 0 00:05:42.431 EAL: Detected lcore 105 as core 33 
on socket 0 00:05:42.431 EAL: Detected lcore 106 as core 34 on socket 0 00:05:42.431 EAL: Detected lcore 107 as core 35 on socket 0 00:05:42.431 EAL: Detected lcore 108 as core 0 on socket 1 00:05:42.431 EAL: Detected lcore 109 as core 1 on socket 1 00:05:42.431 EAL: Detected lcore 110 as core 2 on socket 1 00:05:42.431 EAL: Detected lcore 111 as core 3 on socket 1 00:05:42.431 EAL: Detected lcore 112 as core 4 on socket 1 00:05:42.431 EAL: Detected lcore 113 as core 5 on socket 1 00:05:42.431 EAL: Detected lcore 114 as core 6 on socket 1 00:05:42.431 EAL: Detected lcore 115 as core 7 on socket 1 00:05:42.431 EAL: Detected lcore 116 as core 8 on socket 1 00:05:42.431 EAL: Detected lcore 117 as core 9 on socket 1 00:05:42.431 EAL: Detected lcore 118 as core 10 on socket 1 00:05:42.431 EAL: Detected lcore 119 as core 11 on socket 1 00:05:42.431 EAL: Detected lcore 120 as core 12 on socket 1 00:05:42.431 EAL: Detected lcore 121 as core 13 on socket 1 00:05:42.431 EAL: Detected lcore 122 as core 14 on socket 1 00:05:42.431 EAL: Detected lcore 123 as core 15 on socket 1 00:05:42.431 EAL: Detected lcore 124 as core 16 on socket 1 00:05:42.431 EAL: Detected lcore 125 as core 17 on socket 1 00:05:42.431 EAL: Detected lcore 126 as core 18 on socket 1 00:05:42.431 EAL: Detected lcore 127 as core 19 on socket 1 00:05:42.431 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:42.431 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:42.431 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:42.431 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:42.431 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:42.431 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:42.431 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:42.431 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:42.431 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:42.431 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:42.431 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:42.431 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:42.431 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:42.431 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:42.431 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:42.432 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:42.432 EAL: Maximum logical cores by configuration: 128 00:05:42.432 EAL: Detected CPU lcores: 128 00:05:42.432 EAL: Detected NUMA nodes: 2 00:05:42.432 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:42.432 EAL: Detected shared linkage of DPDK 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:42.432 EAL: Registered [vdev] bus. 
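The lcore banner above (128 lcores detected across 2 sockets, with lcores 128-143 skipped because the build caps usable lcores at 128) mirrors what Linux already exposes under sysfs. A purely illustrative stand-in, not EAL's own code:

```bash
# Reproduce the "Detected lcore N as core C on socket S" mapping from
# the CPU topology attributes in sysfs.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    lcore=${cpu##*cpu}
    core=$(< "$cpu/topology/core_id")
    socket=$(< "$cpu/topology/physical_package_id")
    echo "lcore $lcore is core $core on socket $socket"
done
```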
00:05:42.432 EAL: bus.vdev log level changed from disabled to notice 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:42.432 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:42.432 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:42.432 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:42.432 EAL: No shared files mode enabled, IPC will be disabled 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Bus pci wants IOVA as 'DC' 00:05:42.692 EAL: Bus vdev wants IOVA as 'DC' 00:05:42.692 EAL: Buses did not request a specific IOVA mode. 00:05:42.692 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:42.692 EAL: Selected IOVA mode 'VA' 00:05:42.692 EAL: Probing VFIO support... 00:05:42.692 EAL: IOMMU type 1 (Type 1) is supported 00:05:42.692 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:42.692 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:42.692 EAL: VFIO support initialized 00:05:42.692 EAL: Ask a virtual area of 0x2e000 bytes 00:05:42.692 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:42.692 EAL: Setting up physically contiguous memory... 
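Before the contiguous-memory setup that follows, EAL has settled on VFIO with IOVA as VA. Whether a host will take that same path can be sanity-checked from userspace; a quick sketch using standard sysfs paths (a heuristic, not what EAL itself runs):

```bash
# "IOMMU is available, selecting IOVA as VA" implies a populated
# iommu_groups tree; "VFIO support initialized" implies vfio-pci is
# loaded and IOMMU type 1 is usable.
if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
    echo "IOMMU groups present: IOVA=VA is possible"
else
    echo "no IOMMU groups: expect IOVA=PA or no-IOMMU mode" >&2
fi
[[ -d /sys/bus/pci/drivers/vfio-pci ]] && echo "vfio-pci driver is loaded"
```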
00:05:42.692 EAL: Setting maximum number of open files to 524288 00:05:42.692 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:42.692 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:42.692 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:42.692 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:42.692 EAL: Ask a virtual area of 0x61000 bytes 00:05:42.692 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:42.692 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:42.692 EAL: Ask a virtual area of 0x400000000 bytes 00:05:42.692 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:42.692 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:42.692 EAL: Hugepages will be freed exactly as allocated. 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: TSC frequency is ~2400000 KHz 00:05:42.692 EAL: Main lcore 0 is ready (tid=7f853b3d0a00;cpuset=[0]) 00:05:42.692 EAL: Trying to obtain current memory policy. 00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 0 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 2MB 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:42.692 EAL: Mem event callback 'spdk:(nil)' registered 00:05:42.692 00:05:42.692 00:05:42.692 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.692 http://cunit.sourceforge.net/ 00:05:42.692 00:05:42.692 00:05:42.692 Suite: components_suite 00:05:42.692 Test: vtophys_malloc_test ...passed 00:05:42.692 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 4 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 4MB 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was shrunk by 4MB 00:05:42.692 EAL: Trying to obtain current memory policy. 00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 4 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 6MB 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was shrunk by 6MB 00:05:42.692 EAL: Trying to obtain current memory policy. 00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 4 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 10MB 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was shrunk by 10MB 00:05:42.692 EAL: Trying to obtain current memory policy. 
00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 4 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 18MB 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was shrunk by 18MB 00:05:42.692 EAL: Trying to obtain current memory policy. 00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 4 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 34MB 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was shrunk by 34MB 00:05:42.692 EAL: Trying to obtain current memory policy. 00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 4 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 66MB 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was shrunk by 66MB 00:05:42.692 EAL: Trying to obtain current memory policy. 00:05:42.692 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.692 EAL: Restoring previous memory policy: 4 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.692 EAL: request: mp_malloc_sync 00:05:42.692 EAL: No shared files mode enabled, IPC is disabled 00:05:42.692 EAL: Heap on socket 0 was expanded by 130MB 00:05:42.692 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.693 EAL: request: mp_malloc_sync 00:05:42.693 EAL: No shared files mode enabled, IPC is disabled 00:05:42.693 EAL: Heap on socket 0 was shrunk by 130MB 00:05:42.693 EAL: Trying to obtain current memory policy. 00:05:42.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.693 EAL: Restoring previous memory policy: 4 00:05:42.693 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.693 EAL: request: mp_malloc_sync 00:05:42.693 EAL: No shared files mode enabled, IPC is disabled 00:05:42.693 EAL: Heap on socket 0 was expanded by 258MB 00:05:42.693 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.693 EAL: request: mp_malloc_sync 00:05:42.693 EAL: No shared files mode enabled, IPC is disabled 00:05:42.693 EAL: Heap on socket 0 was shrunk by 258MB 00:05:42.693 EAL: Trying to obtain current memory policy. 
00:05:42.693 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.951 EAL: Restoring previous memory policy: 4 00:05:42.951 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.951 EAL: request: mp_malloc_sync 00:05:42.951 EAL: No shared files mode enabled, IPC is disabled 00:05:42.951 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.951 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.951 EAL: request: mp_malloc_sync 00:05:42.951 EAL: No shared files mode enabled, IPC is disabled 00:05:42.951 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.951 EAL: Trying to obtain current memory policy. 00:05:42.951 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.210 EAL: Restoring previous memory policy: 4 00:05:43.210 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.210 EAL: request: mp_malloc_sync 00:05:43.210 EAL: No shared files mode enabled, IPC is disabled 00:05:43.210 EAL: Heap on socket 0 was expanded by 1026MB 00:05:43.210 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.470 EAL: request: mp_malloc_sync 00:05:43.470 EAL: No shared files mode enabled, IPC is disabled 00:05:43.470 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.470 passed 00:05:43.470 00:05:43.470 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.470 suites 1 1 n/a 0 0 00:05:43.470 tests 2 2 2 0 0 00:05:43.470 asserts 497 497 497 0 n/a 00:05:43.470 00:05:43.470 Elapsed time = 0.688 seconds 00:05:43.470 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.470 EAL: request: mp_malloc_sync 00:05:43.470 EAL: No shared files mode enabled, IPC is disabled 00:05:43.470 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.470 EAL: No shared files mode enabled, IPC is disabled 00:05:43.470 EAL: No shared files mode enabled, IPC is disabled 00:05:43.470 EAL: No shared files mode enabled, IPC is disabled 00:05:43.470 00:05:43.470 real 0m0.817s 00:05:43.470 user 0m0.426s 00:05:43.470 sys 0m0.358s 00:05:43.470 16:32:31 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.470 16:32:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.470 ************************************ 00:05:43.470 END TEST env_vtophys 00:05:43.470 ************************************ 00:05:43.470 16:32:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:43.470 16:32:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.470 16:32:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.470 16:32:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.470 ************************************ 00:05:43.470 START TEST env_pci 00:05:43.470 ************************************ 00:05:43.470 16:32:31 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:43.470 00:05:43.470 00:05:43.470 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.470 http://cunit.sourceforge.net/ 00:05:43.470 00:05:43.470 00:05:43.470 Suite: pci 00:05:43.470 Test: pci_hook ...[2024-12-06 16:32:31.972399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1960222 has claimed it 00:05:43.470 EAL: Cannot find device (10000:00:01.0) 00:05:43.470 EAL: Failed to attach device on primary process 00:05:43.470 passed 00:05:43.470 00:05:43.470 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:43.470 suites 1 1 n/a 0 0 00:05:43.470 tests 1 1 1 0 0 00:05:43.470 asserts 25 25 25 0 n/a 00:05:43.470 00:05:43.470 Elapsed time = 0.024 seconds 00:05:43.470 00:05:43.470 real 0m0.034s 00:05:43.470 user 0m0.007s 00:05:43.470 sys 0m0.027s 00:05:43.470 16:32:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.470 16:32:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.470 ************************************ 00:05:43.470 END TEST env_pci 00:05:43.470 ************************************ 00:05:43.470 16:32:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.470 16:32:32 env -- env/env.sh@15 -- # uname 00:05:43.470 16:32:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.470 16:32:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.470 16:32:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.470 16:32:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:43.470 16:32:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.470 16:32:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.470 ************************************ 00:05:43.470 START TEST env_dpdk_post_init 00:05:43.470 ************************************ 00:05:43.470 16:32:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.470 EAL: Detected CPU lcores: 128 00:05:43.470 EAL: Detected NUMA nodes: 2 00:05:43.470 EAL: Detected shared linkage of DPDK 00:05:43.470 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.470 EAL: Selected IOVA mode 'VA' 00:05:43.470 EAL: VFIO support initialized 00:05:43.470 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.729 EAL: Using IOMMU type 1 (Type 1) 00:05:43.729 EAL: Ignore mapping IO port bar(1) 00:05:43.729 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:43.989 EAL: Ignore mapping IO port bar(1) 00:05:43.989 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:44.248 EAL: Ignore mapping IO port bar(1) 00:05:44.248 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:44.507 EAL: Ignore mapping IO port bar(1) 00:05:44.507 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:44.507 EAL: Ignore mapping IO port bar(1) 00:05:44.775 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:44.775 EAL: Ignore mapping IO port bar(1) 00:05:45.050 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:45.050 EAL: Ignore mapping IO port bar(1) 00:05:45.050 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:45.309 EAL: Ignore mapping IO port bar(1) 00:05:45.309 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:45.568 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:45.828 EAL: Ignore mapping IO port bar(1) 00:05:45.828 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:46.087 EAL: Ignore mapping IO port bar(1) 00:05:46.087 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:46.087 EAL: Ignore mapping IO port bar(1) 00:05:46.346 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:46.346 EAL: Ignore mapping IO port bar(1) 00:05:46.605 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:46.605 EAL: Ignore mapping IO port bar(1) 00:05:46.864 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:46.864 EAL: Ignore mapping IO port bar(1) 00:05:46.864 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:47.126 EAL: Ignore mapping IO port bar(1) 00:05:47.126 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:47.384 EAL: Ignore mapping IO port bar(1) 00:05:47.384 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:47.384 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:47.384 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:47.642 Starting DPDK initialization... 00:05:47.642 Starting SPDK post initialization... 00:05:47.642 SPDK NVMe probe 00:05:47.642 Attaching to 0000:65:00.0 00:05:47.642 Attached to 0000:65:00.0 00:05:47.642 Cleaning up... 00:05:49.542 00:05:49.542 real 0m5.718s 00:05:49.542 user 0m0.180s 00:05:49.542 sys 0m0.091s 00:05:49.542 16:32:37 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.542 16:32:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.542 ************************************ 00:05:49.542 END TEST env_dpdk_post_init 00:05:49.542 ************************************ 00:05:49.542 16:32:37 env -- env/env.sh@26 -- # uname 00:05:49.542 16:32:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:49.542 16:32:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.542 16:32:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.542 16:32:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.542 16:32:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.542 ************************************ 00:05:49.542 START TEST env_mem_callbacks 00:05:49.542 ************************************ 00:05:49.542 16:32:37 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:49.542 EAL: Detected CPU lcores: 128 00:05:49.542 EAL: Detected NUMA nodes: 2 00:05:49.542 EAL: Detected shared linkage of DPDK 00:05:49.542 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:49.542 EAL: Selected IOVA mode 'VA' 00:05:49.542 EAL: VFIO support initialized 00:05:49.542 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:49.542 00:05:49.542 00:05:49.542 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.542 http://cunit.sourceforge.net/ 00:05:49.542 00:05:49.542 00:05:49.543 Suite: memory 00:05:49.543 Test: test ... 
00:05:49.543 register 0x200000200000 2097152 00:05:49.543 malloc 3145728 00:05:49.543 register 0x200000400000 4194304 00:05:49.543 buf 0x200000500000 len 3145728 PASSED 00:05:49.543 malloc 64 00:05:49.543 buf 0x2000004fff40 len 64 PASSED 00:05:49.543 malloc 4194304 00:05:49.543 register 0x200000800000 6291456 00:05:49.543 buf 0x200000a00000 len 4194304 PASSED 00:05:49.543 free 0x200000500000 3145728 00:05:49.543 free 0x2000004fff40 64 00:05:49.543 unregister 0x200000400000 4194304 PASSED 00:05:49.543 free 0x200000a00000 4194304 00:05:49.543 unregister 0x200000800000 6291456 PASSED 00:05:49.543 malloc 8388608 00:05:49.543 register 0x200000400000 10485760 00:05:49.543 buf 0x200000600000 len 8388608 PASSED 00:05:49.543 free 0x200000600000 8388608 00:05:49.543 unregister 0x200000400000 10485760 PASSED 00:05:49.543 passed 00:05:49.543 00:05:49.543 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.543 suites 1 1 n/a 0 0 00:05:49.543 tests 1 1 1 0 0 00:05:49.543 asserts 15 15 15 0 n/a 00:05:49.543 00:05:49.543 Elapsed time = 0.008 seconds 00:05:49.543 00:05:49.543 real 0m0.052s 00:05:49.543 user 0m0.009s 00:05:49.543 sys 0m0.043s 00:05:49.543 16:32:37 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.543 16:32:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:49.543 ************************************ 00:05:49.543 END TEST env_mem_callbacks 00:05:49.543 ************************************ 00:05:49.543 00:05:49.543 real 0m7.203s 00:05:49.543 user 0m0.962s 00:05:49.543 sys 0m0.777s 00:05:49.543 16:32:37 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.543 16:32:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.543 ************************************ 00:05:49.543 END TEST env 00:05:49.543 ************************************ 00:05:49.543 16:32:37 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:49.543 16:32:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.543 16:32:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.543 16:32:37 -- common/autotest_common.sh@10 -- # set +x 00:05:49.543 ************************************ 00:05:49.543 START TEST rpc 00:05:49.543 ************************************ 00:05:49.543 16:32:37 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:49.543 * Looking for test storage... 
00:05:49.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.543 16:32:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.543 16:32:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.543 16:32:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.543 16:32:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.543 16:32:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.543 16:32:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.543 16:32:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.543 16:32:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.543 16:32:38 rpc -- scripts/common.sh@345 -- # : 1 00:05:49.543 16:32:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.543 16:32:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.543 16:32:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.543 16:32:38 rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.543 16:32:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.543 16:32:38 rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.543 16:32:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.543 16:32:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.543 16:32:38 rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.543 16:32:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.543 16:32:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.543 16:32:38 rpc -- scripts/common.sh@368 -- # return 0 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:49.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.543 --rc genhtml_branch_coverage=1 00:05:49.543 --rc genhtml_function_coverage=1 00:05:49.543 --rc genhtml_legend=1 00:05:49.543 --rc geninfo_all_blocks=1 00:05:49.543 --rc geninfo_unexecuted_blocks=1 00:05:49.543 00:05:49.543 ' 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:49.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.543 --rc genhtml_branch_coverage=1 00:05:49.543 --rc genhtml_function_coverage=1 00:05:49.543 --rc genhtml_legend=1 00:05:49.543 --rc geninfo_all_blocks=1 00:05:49.543 --rc geninfo_unexecuted_blocks=1 00:05:49.543 00:05:49.543 ' 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:49.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.543 --rc genhtml_branch_coverage=1 00:05:49.543 --rc genhtml_function_coverage=1 
00:05:49.543 --rc genhtml_legend=1 00:05:49.543 --rc geninfo_all_blocks=1 00:05:49.543 --rc geninfo_unexecuted_blocks=1 00:05:49.543 00:05:49.543 ' 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:49.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.543 --rc genhtml_branch_coverage=1 00:05:49.543 --rc genhtml_function_coverage=1 00:05:49.543 --rc genhtml_legend=1 00:05:49.543 --rc geninfo_all_blocks=1 00:05:49.543 --rc geninfo_unexecuted_blocks=1 00:05:49.543 00:05:49.543 ' 00:05:49.543 16:32:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1961641 00:05:49.543 16:32:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.543 16:32:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1961641 00:05:49.543 16:32:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@835 -- # '[' -z 1961641 ']' 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.543 16:32:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.543 [2024-12-06 16:32:38.140459] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:05:49.543 [2024-12-06 16:32:38.140532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1961641 ] 00:05:49.543 [2024-12-06 16:32:38.224137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.802 [2024-12-06 16:32:38.252091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:49.802 [2024-12-06 16:32:38.252147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1961641' to capture a snapshot of events at runtime. 00:05:49.802 [2024-12-06 16:32:38.252156] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:49.802 [2024-12-06 16:32:38.252164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:49.802 [2024-12-06 16:32:38.252171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1961641 for offline analysis/debug. 
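The startup handshake traced above (launch `spdk_tgt -e bdev`, then waitforlisten polls until the RPC socket answers) follows a pattern worth spelling out. A plain-loop re-expression of the harness helper, not its actual implementation:

```bash
# Start the target with the bdev tracepoint group enabled, as in this
# run, then poll the default RPC socket until it responds.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$rootdir/build/bin/spdk_tgt" -e bdev &
spdk_pid=$!
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$spdk_pid" 2> /dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "spdk_tgt (pid $spdk_pid) is listening on /var/tmp/spdk.sock"
```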
00:05:49.802 [2024-12-06 16:32:38.252932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.370 16:32:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.370 16:32:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:50.371 16:32:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.371 16:32:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:50.371 16:32:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:50.371 16:32:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:50.371 16:32:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.371 16:32:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.371 16:32:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.371 ************************************ 00:05:50.371 START TEST rpc_integrity 00:05:50.371 ************************************ 00:05:50.371 16:32:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:50.371 16:32:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.371 16:32:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.371 16:32:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.371 16:32:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.371 16:32:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.371 16:32:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.371 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.371 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.371 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.371 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.371 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.371 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:50.371 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.371 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.371 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.371 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.371 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:50.371 { 00:05:50.371 "name": "Malloc0", 00:05:50.371 "aliases": [ 00:05:50.371 "f1fe4120-3a78-4187-b9f2-4809b5f7196e" 00:05:50.371 ], 00:05:50.371 "product_name": "Malloc disk", 00:05:50.371 "block_size": 512, 00:05:50.371 "num_blocks": 16384, 00:05:50.371 "uuid": "f1fe4120-3a78-4187-b9f2-4809b5f7196e", 00:05:50.371 "assigned_rate_limits": { 00:05:50.371 "rw_ios_per_sec": 0, 00:05:50.371 "rw_mbytes_per_sec": 0, 00:05:50.371 "r_mbytes_per_sec": 0, 00:05:50.371 "w_mbytes_per_sec": 0 00:05:50.371 }, 
00:05:50.371 "claimed": false, 00:05:50.371 "zoned": false, 00:05:50.371 "supported_io_types": { 00:05:50.371 "read": true, 00:05:50.371 "write": true, 00:05:50.371 "unmap": true, 00:05:50.371 "flush": true, 00:05:50.371 "reset": true, 00:05:50.371 "nvme_admin": false, 00:05:50.371 "nvme_io": false, 00:05:50.371 "nvme_io_md": false, 00:05:50.371 "write_zeroes": true, 00:05:50.371 "zcopy": true, 00:05:50.371 "get_zone_info": false, 00:05:50.371 "zone_management": false, 00:05:50.371 "zone_append": false, 00:05:50.371 "compare": false, 00:05:50.371 "compare_and_write": false, 00:05:50.371 "abort": true, 00:05:50.371 "seek_hole": false, 00:05:50.371 "seek_data": false, 00:05:50.371 "copy": true, 00:05:50.371 "nvme_iov_md": false 00:05:50.371 }, 00:05:50.371 "memory_domains": [ 00:05:50.371 { 00:05:50.371 "dma_device_id": "system", 00:05:50.371 "dma_device_type": 1 00:05:50.371 }, 00:05:50.371 { 00:05:50.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.371 "dma_device_type": 2 00:05:50.371 } 00:05:50.371 ], 00:05:50.371 "driver_specific": {} 00:05:50.371 } 00:05:50.371 ]' 00:05:50.371 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 [2024-12-06 16:32:39.066260] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:50.632 [2024-12-06 16:32:39.066304] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:50.632 [2024-12-06 16:32:39.066320] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15a1e20 00:05:50.632 [2024-12-06 16:32:39.066328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:50.632 [2024-12-06 16:32:39.067889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:50.632 [2024-12-06 16:32:39.067926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:50.632 Passthru0 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:50.632 { 00:05:50.632 "name": "Malloc0", 00:05:50.632 "aliases": [ 00:05:50.632 "f1fe4120-3a78-4187-b9f2-4809b5f7196e" 00:05:50.632 ], 00:05:50.632 "product_name": "Malloc disk", 00:05:50.632 "block_size": 512, 00:05:50.632 "num_blocks": 16384, 00:05:50.632 "uuid": "f1fe4120-3a78-4187-b9f2-4809b5f7196e", 00:05:50.632 "assigned_rate_limits": { 00:05:50.632 "rw_ios_per_sec": 0, 00:05:50.632 "rw_mbytes_per_sec": 0, 00:05:50.632 "r_mbytes_per_sec": 0, 00:05:50.632 "w_mbytes_per_sec": 0 00:05:50.632 }, 00:05:50.632 "claimed": true, 00:05:50.632 "claim_type": "exclusive_write", 00:05:50.632 "zoned": false, 00:05:50.632 "supported_io_types": { 00:05:50.632 "read": true, 00:05:50.632 "write": true, 00:05:50.632 "unmap": true, 00:05:50.632 "flush": 
true, 00:05:50.632 "reset": true, 00:05:50.632 "nvme_admin": false, 00:05:50.632 "nvme_io": false, 00:05:50.632 "nvme_io_md": false, 00:05:50.632 "write_zeroes": true, 00:05:50.632 "zcopy": true, 00:05:50.632 "get_zone_info": false, 00:05:50.632 "zone_management": false, 00:05:50.632 "zone_append": false, 00:05:50.632 "compare": false, 00:05:50.632 "compare_and_write": false, 00:05:50.632 "abort": true, 00:05:50.632 "seek_hole": false, 00:05:50.632 "seek_data": false, 00:05:50.632 "copy": true, 00:05:50.632 "nvme_iov_md": false 00:05:50.632 }, 00:05:50.632 "memory_domains": [ 00:05:50.632 { 00:05:50.632 "dma_device_id": "system", 00:05:50.632 "dma_device_type": 1 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.632 "dma_device_type": 2 00:05:50.632 } 00:05:50.632 ], 00:05:50.632 "driver_specific": {} 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "name": "Passthru0", 00:05:50.632 "aliases": [ 00:05:50.632 "10f5cb48-aafe-551f-bb93-a00fae903600" 00:05:50.632 ], 00:05:50.632 "product_name": "passthru", 00:05:50.632 "block_size": 512, 00:05:50.632 "num_blocks": 16384, 00:05:50.632 "uuid": "10f5cb48-aafe-551f-bb93-a00fae903600", 00:05:50.632 "assigned_rate_limits": { 00:05:50.632 "rw_ios_per_sec": 0, 00:05:50.632 "rw_mbytes_per_sec": 0, 00:05:50.632 "r_mbytes_per_sec": 0, 00:05:50.632 "w_mbytes_per_sec": 0 00:05:50.632 }, 00:05:50.632 "claimed": false, 00:05:50.632 "zoned": false, 00:05:50.632 "supported_io_types": { 00:05:50.632 "read": true, 00:05:50.632 "write": true, 00:05:50.632 "unmap": true, 00:05:50.632 "flush": true, 00:05:50.632 "reset": true, 00:05:50.632 "nvme_admin": false, 00:05:50.632 "nvme_io": false, 00:05:50.632 "nvme_io_md": false, 00:05:50.632 "write_zeroes": true, 00:05:50.632 "zcopy": true, 00:05:50.632 "get_zone_info": false, 00:05:50.632 "zone_management": false, 00:05:50.632 "zone_append": false, 00:05:50.632 "compare": false, 00:05:50.632 "compare_and_write": false, 00:05:50.632 "abort": true, 00:05:50.632 "seek_hole": false, 00:05:50.632 "seek_data": false, 00:05:50.632 "copy": true, 00:05:50.632 "nvme_iov_md": false 00:05:50.632 }, 00:05:50.632 "memory_domains": [ 00:05:50.632 { 00:05:50.632 "dma_device_id": "system", 00:05:50.632 "dma_device_type": 1 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.632 "dma_device_type": 2 00:05:50.632 } 00:05:50.632 ], 00:05:50.632 "driver_specific": { 00:05:50.632 "passthru": { 00:05:50.632 "name": "Passthru0", 00:05:50.632 "base_bdev_name": "Malloc0" 00:05:50.632 } 00:05:50.632 } 00:05:50.632 } 00:05:50.632 ]' 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.632 16:32:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.632 00:05:50.632 real 0m0.205s 00:05:50.632 user 0m0.116s 00:05:50.632 sys 0m0.027s 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 ************************************ 00:05:50.632 END TEST rpc_integrity 00:05:50.632 ************************************ 00:05:50.632 16:32:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:50.632 16:32:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.632 16:32:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.632 16:32:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 ************************************ 00:05:50.632 START TEST rpc_plugins 00:05:50.632 ************************************ 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:50.632 { 00:05:50.632 "name": "Malloc1", 00:05:50.632 "aliases": [ 00:05:50.632 "ef1ad42f-c0aa-4225-a46f-b8d509991dea" 00:05:50.632 ], 00:05:50.632 "product_name": "Malloc disk", 00:05:50.632 "block_size": 4096, 00:05:50.632 "num_blocks": 256, 00:05:50.632 "uuid": "ef1ad42f-c0aa-4225-a46f-b8d509991dea", 00:05:50.632 "assigned_rate_limits": { 00:05:50.632 "rw_ios_per_sec": 0, 00:05:50.632 "rw_mbytes_per_sec": 0, 00:05:50.632 "r_mbytes_per_sec": 0, 00:05:50.632 "w_mbytes_per_sec": 0 00:05:50.632 }, 00:05:50.632 "claimed": false, 00:05:50.632 "zoned": false, 00:05:50.632 "supported_io_types": { 00:05:50.632 "read": true, 00:05:50.632 "write": true, 00:05:50.632 "unmap": true, 00:05:50.632 "flush": true, 00:05:50.632 "reset": true, 00:05:50.632 "nvme_admin": false, 00:05:50.632 "nvme_io": false, 00:05:50.632 "nvme_io_md": false, 00:05:50.632 "write_zeroes": true, 00:05:50.632 "zcopy": true, 00:05:50.632 "get_zone_info": false, 00:05:50.632 "zone_management": false, 00:05:50.632 "zone_append": false, 00:05:50.632 "compare": false, 00:05:50.632 "compare_and_write": false, 00:05:50.632 "abort": true, 00:05:50.632 "seek_hole": false, 00:05:50.632 "seek_data": false, 00:05:50.632 "copy": true, 00:05:50.632 "nvme_iov_md": false 
00:05:50.632 }, 00:05:50.632 "memory_domains": [ 00:05:50.632 { 00:05:50.632 "dma_device_id": "system", 00:05:50.632 "dma_device_type": 1 00:05:50.632 }, 00:05:50.632 { 00:05:50.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.632 "dma_device_type": 2 00:05:50.632 } 00:05:50.632 ], 00:05:50.632 "driver_specific": {} 00:05:50.632 } 00:05:50.632 ]' 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.632 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:50.632 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:50.892 16:32:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:50.892 00:05:50.892 real 0m0.107s 00:05:50.892 user 0m0.054s 00:05:50.892 sys 0m0.017s 00:05:50.892 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.892 16:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.892 ************************************ 00:05:50.892 END TEST rpc_plugins 00:05:50.892 ************************************ 00:05:50.892 16:32:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:50.892 16:32:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.892 16:32:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.892 16:32:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.892 ************************************ 00:05:50.892 START TEST rpc_trace_cmd_test 00:05:50.892 ************************************ 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:50.892 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1961641", 00:05:50.892 "tpoint_group_mask": "0x8", 00:05:50.892 "iscsi_conn": { 00:05:50.892 "mask": "0x2", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "scsi": { 00:05:50.892 "mask": "0x4", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "bdev": { 00:05:50.892 "mask": "0x8", 00:05:50.892 "tpoint_mask": "0xffffffffffffffff" 00:05:50.892 }, 00:05:50.892 "nvmf_rdma": { 00:05:50.892 "mask": "0x10", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "nvmf_tcp": { 00:05:50.892 "mask": "0x20", 00:05:50.892 
"tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "ftl": { 00:05:50.892 "mask": "0x40", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "blobfs": { 00:05:50.892 "mask": "0x80", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "dsa": { 00:05:50.892 "mask": "0x200", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "thread": { 00:05:50.892 "mask": "0x400", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "nvme_pcie": { 00:05:50.892 "mask": "0x800", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "iaa": { 00:05:50.892 "mask": "0x1000", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "nvme_tcp": { 00:05:50.892 "mask": "0x2000", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "bdev_nvme": { 00:05:50.892 "mask": "0x4000", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "sock": { 00:05:50.892 "mask": "0x8000", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "blob": { 00:05:50.892 "mask": "0x10000", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "bdev_raid": { 00:05:50.892 "mask": "0x20000", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 }, 00:05:50.892 "scheduler": { 00:05:50.892 "mask": "0x40000", 00:05:50.892 "tpoint_mask": "0x0" 00:05:50.892 } 00:05:50.892 }' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:50.892 00:05:50.892 real 0m0.154s 00:05:50.892 user 0m0.124s 00:05:50.892 sys 0m0.021s 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.892 16:32:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.892 ************************************ 00:05:50.892 END TEST rpc_trace_cmd_test 00:05:50.892 ************************************ 00:05:50.892 16:32:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:50.892 16:32:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:50.892 16:32:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:50.892 16:32:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.892 16:32:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.892 16:32:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.892 ************************************ 00:05:50.892 START TEST rpc_daemon_integrity 00:05:50.892 ************************************ 00:05:50.892 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.153 16:32:39 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.153 { 00:05:51.153 "name": "Malloc2", 00:05:51.153 "aliases": [ 00:05:51.153 "4e36b13b-80f2-4564-a8b5-396d3f196130" 00:05:51.153 ], 00:05:51.153 "product_name": "Malloc disk", 00:05:51.153 "block_size": 512, 00:05:51.153 "num_blocks": 16384, 00:05:51.153 "uuid": "4e36b13b-80f2-4564-a8b5-396d3f196130", 00:05:51.153 "assigned_rate_limits": { 00:05:51.153 "rw_ios_per_sec": 0, 00:05:51.153 "rw_mbytes_per_sec": 0, 00:05:51.153 "r_mbytes_per_sec": 0, 00:05:51.153 "w_mbytes_per_sec": 0 00:05:51.153 }, 00:05:51.153 "claimed": false, 00:05:51.153 "zoned": false, 00:05:51.153 "supported_io_types": { 00:05:51.153 "read": true, 00:05:51.153 "write": true, 00:05:51.153 "unmap": true, 00:05:51.153 "flush": true, 00:05:51.153 "reset": true, 00:05:51.153 "nvme_admin": false, 00:05:51.153 "nvme_io": false, 00:05:51.153 "nvme_io_md": false, 00:05:51.153 "write_zeroes": true, 00:05:51.153 "zcopy": true, 00:05:51.153 "get_zone_info": false, 00:05:51.153 "zone_management": false, 00:05:51.153 "zone_append": false, 00:05:51.153 "compare": false, 00:05:51.153 "compare_and_write": false, 00:05:51.153 "abort": true, 00:05:51.153 "seek_hole": false, 00:05:51.153 "seek_data": false, 00:05:51.153 "copy": true, 00:05:51.153 "nvme_iov_md": false 00:05:51.153 }, 00:05:51.153 "memory_domains": [ 00:05:51.153 { 00:05:51.153 "dma_device_id": "system", 00:05:51.153 "dma_device_type": 1 00:05:51.153 }, 00:05:51.153 { 00:05:51.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.153 "dma_device_type": 2 00:05:51.153 } 00:05:51.153 ], 00:05:51.153 "driver_specific": {} 00:05:51.153 } 00:05:51.153 ]' 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 [2024-12-06 16:32:39.679908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.153 
[2024-12-06 16:32:39.679951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.153 [2024-12-06 16:32:39.679967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16d20f0 00:05:51.153 [2024-12-06 16:32:39.679974] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.153 [2024-12-06 16:32:39.681439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.153 [2024-12-06 16:32:39.681476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.153 Passthru0 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.153 { 00:05:51.153 "name": "Malloc2", 00:05:51.153 "aliases": [ 00:05:51.153 "4e36b13b-80f2-4564-a8b5-396d3f196130" 00:05:51.153 ], 00:05:51.153 "product_name": "Malloc disk", 00:05:51.153 "block_size": 512, 00:05:51.153 "num_blocks": 16384, 00:05:51.153 "uuid": "4e36b13b-80f2-4564-a8b5-396d3f196130", 00:05:51.153 "assigned_rate_limits": { 00:05:51.153 "rw_ios_per_sec": 0, 00:05:51.153 "rw_mbytes_per_sec": 0, 00:05:51.153 "r_mbytes_per_sec": 0, 00:05:51.153 "w_mbytes_per_sec": 0 00:05:51.153 }, 00:05:51.153 "claimed": true, 00:05:51.153 "claim_type": "exclusive_write", 00:05:51.153 "zoned": false, 00:05:51.153 "supported_io_types": { 00:05:51.153 "read": true, 00:05:51.153 "write": true, 00:05:51.153 "unmap": true, 00:05:51.153 "flush": true, 00:05:51.153 "reset": true, 00:05:51.153 "nvme_admin": false, 00:05:51.153 "nvme_io": false, 00:05:51.153 "nvme_io_md": false, 00:05:51.153 "write_zeroes": true, 00:05:51.153 "zcopy": true, 00:05:51.153 "get_zone_info": false, 00:05:51.153 "zone_management": false, 00:05:51.153 "zone_append": false, 00:05:51.153 "compare": false, 00:05:51.153 "compare_and_write": false, 00:05:51.153 "abort": true, 00:05:51.153 "seek_hole": false, 00:05:51.153 "seek_data": false, 00:05:51.153 "copy": true, 00:05:51.153 "nvme_iov_md": false 00:05:51.153 }, 00:05:51.153 "memory_domains": [ 00:05:51.153 { 00:05:51.153 "dma_device_id": "system", 00:05:51.153 "dma_device_type": 1 00:05:51.153 }, 00:05:51.153 { 00:05:51.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.153 "dma_device_type": 2 00:05:51.153 } 00:05:51.153 ], 00:05:51.153 "driver_specific": {} 00:05:51.153 }, 00:05:51.153 { 00:05:51.153 "name": "Passthru0", 00:05:51.153 "aliases": [ 00:05:51.153 "5fbf0e2b-386e-5d2c-867f-f5f0fb244cda" 00:05:51.153 ], 00:05:51.153 "product_name": "passthru", 00:05:51.153 "block_size": 512, 00:05:51.153 "num_blocks": 16384, 00:05:51.153 "uuid": "5fbf0e2b-386e-5d2c-867f-f5f0fb244cda", 00:05:51.153 "assigned_rate_limits": { 00:05:51.153 "rw_ios_per_sec": 0, 00:05:51.153 "rw_mbytes_per_sec": 0, 00:05:51.153 "r_mbytes_per_sec": 0, 00:05:51.153 "w_mbytes_per_sec": 0 00:05:51.153 }, 00:05:51.153 "claimed": false, 00:05:51.153 "zoned": false, 00:05:51.153 "supported_io_types": { 00:05:51.153 "read": true, 00:05:51.153 "write": true, 00:05:51.153 "unmap": true, 00:05:51.153 "flush": true, 00:05:51.153 "reset": true, 
00:05:51.153 "nvme_admin": false, 00:05:51.153 "nvme_io": false, 00:05:51.153 "nvme_io_md": false, 00:05:51.153 "write_zeroes": true, 00:05:51.153 "zcopy": true, 00:05:51.153 "get_zone_info": false, 00:05:51.153 "zone_management": false, 00:05:51.153 "zone_append": false, 00:05:51.153 "compare": false, 00:05:51.153 "compare_and_write": false, 00:05:51.153 "abort": true, 00:05:51.153 "seek_hole": false, 00:05:51.153 "seek_data": false, 00:05:51.153 "copy": true, 00:05:51.153 "nvme_iov_md": false 00:05:51.153 }, 00:05:51.153 "memory_domains": [ 00:05:51.153 { 00:05:51.153 "dma_device_id": "system", 00:05:51.153 "dma_device_type": 1 00:05:51.153 }, 00:05:51.153 { 00:05:51.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.153 "dma_device_type": 2 00:05:51.153 } 00:05:51.153 ], 00:05:51.153 "driver_specific": { 00:05:51.153 "passthru": { 00:05:51.153 "name": "Passthru0", 00:05:51.153 "base_bdev_name": "Malloc2" 00:05:51.153 } 00:05:51.153 } 00:05:51.153 } 00:05:51.153 ]' 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.153 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.154 00:05:51.154 real 0m0.200s 00:05:51.154 user 0m0.111s 00:05:51.154 sys 0m0.033s 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.154 16:32:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.154 ************************************ 00:05:51.154 END TEST rpc_daemon_integrity 00:05:51.154 ************************************ 00:05:51.154 16:32:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:51.154 16:32:39 rpc -- rpc/rpc.sh@84 -- # killprocess 1961641 00:05:51.154 16:32:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 1961641 ']' 00:05:51.154 16:32:39 rpc -- common/autotest_common.sh@958 -- # kill -0 1961641 00:05:51.154 16:32:39 rpc -- common/autotest_common.sh@959 -- # uname 00:05:51.154 16:32:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.154 16:32:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1961641 
00:05:51.413 16:32:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:51.413 16:32:39 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.413 16:32:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1961641' 00:05:51.413 killing process with pid 1961641 00:05:51.413 16:32:39 rpc -- common/autotest_common.sh@973 -- # kill 1961641 00:05:51.413 16:32:39 rpc -- common/autotest_common.sh@978 -- # wait 1961641 00:05:51.413 00:05:51.414 real 0m2.129s 00:05:51.414 user 0m2.570s 00:05:51.414 sys 0m0.650s 00:05:51.414 16:32:40 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.414 16:32:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.414 ************************************ 00:05:51.414 END TEST rpc 00:05:51.414 ************************************ 00:05:51.674 16:32:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:51.674 16:32:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.674 16:32:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.674 16:32:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.674 ************************************ 00:05:51.674 START TEST skip_rpc 00:05:51.674 ************************************ 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:51.674 * Looking for test storage... 00:05:51.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.674 16:32:40 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.674 --rc genhtml_branch_coverage=1 00:05:51.674 --rc genhtml_function_coverage=1 00:05:51.674 --rc genhtml_legend=1 00:05:51.674 --rc geninfo_all_blocks=1 00:05:51.674 --rc geninfo_unexecuted_blocks=1 00:05:51.674 00:05:51.674 ' 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.674 --rc genhtml_branch_coverage=1 00:05:51.674 --rc genhtml_function_coverage=1 00:05:51.674 --rc genhtml_legend=1 00:05:51.674 --rc geninfo_all_blocks=1 00:05:51.674 --rc geninfo_unexecuted_blocks=1 00:05:51.674 00:05:51.674 ' 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.674 --rc genhtml_branch_coverage=1 00:05:51.674 --rc genhtml_function_coverage=1 00:05:51.674 --rc genhtml_legend=1 00:05:51.674 --rc geninfo_all_blocks=1 00:05:51.674 --rc geninfo_unexecuted_blocks=1 00:05:51.674 00:05:51.674 ' 00:05:51.674 16:32:40 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.674 --rc genhtml_branch_coverage=1 00:05:51.674 --rc genhtml_function_coverage=1 00:05:51.674 --rc genhtml_legend=1 00:05:51.674 --rc geninfo_all_blocks=1 00:05:51.674 --rc geninfo_unexecuted_blocks=1 00:05:51.675 00:05:51.675 ' 00:05:51.675 16:32:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:51.675 16:32:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:51.675 16:32:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:51.675 16:32:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.675 16:32:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.675 16:32:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.675 ************************************ 00:05:51.675 START TEST skip_rpc 00:05:51.675 ************************************ 00:05:51.675 16:32:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:51.675 
16:32:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1962203 00:05:51.675 16:32:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.675 16:32:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:51.675 16:32:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:51.675 [2024-12-06 16:32:40.324653] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:05:51.675 [2024-12-06 16:32:40.324714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962203 ] 00:05:51.934 [2024-12-06 16:32:40.389333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.934 [2024-12-06 16:32:40.406129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1962203 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1962203 ']' 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1962203 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1962203 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.238 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1962203' 00:05:57.238 killing process with pid 1962203 00:05:57.238 16:32:45 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1962203 00:05:57.239 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1962203 00:05:57.239 00:05:57.239 real 0m5.232s 00:05:57.239 user 0m5.054s 00:05:57.239 sys 0m0.207s 00:05:57.239 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.239 16:32:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.239 ************************************ 00:05:57.239 END TEST skip_rpc 00:05:57.239 ************************************ 00:05:57.239 16:32:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:57.239 16:32:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.239 16:32:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.239 16:32:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.239 ************************************ 00:05:57.239 START TEST skip_rpc_with_json 00:05:57.239 ************************************ 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1963440 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1963440 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1963440 ']' 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.239 [2024-12-06 16:32:45.597644] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
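The skip_rpc test that just ended starts the target with --no-rpc-server and passes only if an RPC call then fails. A condensed sketch of that check, assuming the in-tree binary and client paths (the fixed sleep mirrors the test's own 5-second settle delay):

    # With --no-rpc-server no socket is created, so any RPC must fail;
    # the test treats that failure as the expected outcome.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo 'unexpected: RPC server answered' >&2
        exit 1
    fi
    kill $tgt_pid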
00:05:57.239 [2024-12-06 16:32:45.597692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963440 ] 00:05:57.239 [2024-12-06 16:32:45.662143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.239 [2024-12-06 16:32:45.679300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.239 [2024-12-06 16:32:45.831732] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:57.239 request: 00:05:57.239 { 00:05:57.239 "trtype": "tcp", 00:05:57.239 "method": "nvmf_get_transports", 00:05:57.239 "req_id": 1 00:05:57.239 } 00:05:57.239 Got JSON-RPC error response 00:05:57.239 response: 00:05:57.239 { 00:05:57.239 "code": -19, 00:05:57.239 "message": "No such device" 00:05:57.239 } 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.239 [2024-12-06 16:32:45.839820] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.239 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.499 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.499 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:57.499 { 00:05:57.499 "subsystems": [ 00:05:57.499 { 00:05:57.499 "subsystem": "fsdev", 00:05:57.499 "config": [ 00:05:57.500 { 00:05:57.500 "method": "fsdev_set_opts", 00:05:57.500 "params": { 00:05:57.500 "fsdev_io_pool_size": 65535, 00:05:57.500 "fsdev_io_cache_size": 256 00:05:57.500 } 00:05:57.500 } 00:05:57.500 ] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "vfio_user_target", 00:05:57.500 "config": null 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "keyring", 00:05:57.500 "config": [] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "iobuf", 00:05:57.500 "config": [ 00:05:57.500 { 00:05:57.500 "method": "iobuf_set_options", 00:05:57.500 "params": { 00:05:57.500 "small_pool_count": 8192, 00:05:57.500 "large_pool_count": 1024, 00:05:57.500 "small_bufsize": 8192, 00:05:57.500 "large_bufsize": 135168, 00:05:57.500 "enable_numa": false 00:05:57.500 } 00:05:57.500 } 
00:05:57.500 ] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "sock", 00:05:57.500 "config": [ 00:05:57.500 { 00:05:57.500 "method": "sock_set_default_impl", 00:05:57.500 "params": { 00:05:57.500 "impl_name": "posix" 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "sock_impl_set_options", 00:05:57.500 "params": { 00:05:57.500 "impl_name": "ssl", 00:05:57.500 "recv_buf_size": 4096, 00:05:57.500 "send_buf_size": 4096, 00:05:57.500 "enable_recv_pipe": true, 00:05:57.500 "enable_quickack": false, 00:05:57.500 "enable_placement_id": 0, 00:05:57.500 "enable_zerocopy_send_server": true, 00:05:57.500 "enable_zerocopy_send_client": false, 00:05:57.500 "zerocopy_threshold": 0, 00:05:57.500 "tls_version": 0, 00:05:57.500 "enable_ktls": false 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "sock_impl_set_options", 00:05:57.500 "params": { 00:05:57.500 "impl_name": "posix", 00:05:57.500 "recv_buf_size": 2097152, 00:05:57.500 "send_buf_size": 2097152, 00:05:57.500 "enable_recv_pipe": true, 00:05:57.500 "enable_quickack": false, 00:05:57.500 "enable_placement_id": 0, 00:05:57.500 "enable_zerocopy_send_server": true, 00:05:57.500 "enable_zerocopy_send_client": false, 00:05:57.500 "zerocopy_threshold": 0, 00:05:57.500 "tls_version": 0, 00:05:57.500 "enable_ktls": false 00:05:57.500 } 00:05:57.500 } 00:05:57.500 ] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "vmd", 00:05:57.500 "config": [] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "accel", 00:05:57.500 "config": [ 00:05:57.500 { 00:05:57.500 "method": "accel_set_options", 00:05:57.500 "params": { 00:05:57.500 "small_cache_size": 128, 00:05:57.500 "large_cache_size": 16, 00:05:57.500 "task_count": 2048, 00:05:57.500 "sequence_count": 2048, 00:05:57.500 "buf_count": 2048 00:05:57.500 } 00:05:57.500 } 00:05:57.500 ] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "bdev", 00:05:57.500 "config": [ 00:05:57.500 { 00:05:57.500 "method": "bdev_set_options", 00:05:57.500 "params": { 00:05:57.500 "bdev_io_pool_size": 65535, 00:05:57.500 "bdev_io_cache_size": 256, 00:05:57.500 "bdev_auto_examine": true, 00:05:57.500 "iobuf_small_cache_size": 128, 00:05:57.500 "iobuf_large_cache_size": 16 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "bdev_raid_set_options", 00:05:57.500 "params": { 00:05:57.500 "process_window_size_kb": 1024, 00:05:57.500 "process_max_bandwidth_mb_sec": 0 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "bdev_iscsi_set_options", 00:05:57.500 "params": { 00:05:57.500 "timeout_sec": 30 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "bdev_nvme_set_options", 00:05:57.500 "params": { 00:05:57.500 "action_on_timeout": "none", 00:05:57.500 "timeout_us": 0, 00:05:57.500 "timeout_admin_us": 0, 00:05:57.500 "keep_alive_timeout_ms": 10000, 00:05:57.500 "arbitration_burst": 0, 00:05:57.500 "low_priority_weight": 0, 00:05:57.500 "medium_priority_weight": 0, 00:05:57.500 "high_priority_weight": 0, 00:05:57.500 "nvme_adminq_poll_period_us": 10000, 00:05:57.500 "nvme_ioq_poll_period_us": 0, 00:05:57.500 "io_queue_requests": 0, 00:05:57.500 "delay_cmd_submit": true, 00:05:57.500 "transport_retry_count": 4, 00:05:57.500 "bdev_retry_count": 3, 00:05:57.500 "transport_ack_timeout": 0, 00:05:57.500 "ctrlr_loss_timeout_sec": 0, 00:05:57.500 "reconnect_delay_sec": 0, 00:05:57.500 "fast_io_fail_timeout_sec": 0, 00:05:57.500 "disable_auto_failback": false, 00:05:57.500 "generate_uuids": false, 00:05:57.500 "transport_tos": 
0, 00:05:57.500 "nvme_error_stat": false, 00:05:57.500 "rdma_srq_size": 0, 00:05:57.500 "io_path_stat": false, 00:05:57.500 "allow_accel_sequence": false, 00:05:57.500 "rdma_max_cq_size": 0, 00:05:57.500 "rdma_cm_event_timeout_ms": 0, 00:05:57.500 "dhchap_digests": [ 00:05:57.500 "sha256", 00:05:57.500 "sha384", 00:05:57.500 "sha512" 00:05:57.500 ], 00:05:57.500 "dhchap_dhgroups": [ 00:05:57.500 "null", 00:05:57.500 "ffdhe2048", 00:05:57.500 "ffdhe3072", 00:05:57.500 "ffdhe4096", 00:05:57.500 "ffdhe6144", 00:05:57.500 "ffdhe8192" 00:05:57.500 ] 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "bdev_nvme_set_hotplug", 00:05:57.500 "params": { 00:05:57.500 "period_us": 100000, 00:05:57.500 "enable": false 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "bdev_wait_for_examine" 00:05:57.500 } 00:05:57.500 ] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "scsi", 00:05:57.500 "config": null 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "scheduler", 00:05:57.500 "config": [ 00:05:57.500 { 00:05:57.500 "method": "framework_set_scheduler", 00:05:57.500 "params": { 00:05:57.500 "name": "static" 00:05:57.500 } 00:05:57.500 } 00:05:57.500 ] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "vhost_scsi", 00:05:57.500 "config": [] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "vhost_blk", 00:05:57.500 "config": [] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "ublk", 00:05:57.500 "config": [] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "nbd", 00:05:57.500 "config": [] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "nvmf", 00:05:57.500 "config": [ 00:05:57.500 { 00:05:57.500 "method": "nvmf_set_config", 00:05:57.500 "params": { 00:05:57.500 "discovery_filter": "match_any", 00:05:57.500 "admin_cmd_passthru": { 00:05:57.500 "identify_ctrlr": false 00:05:57.500 }, 00:05:57.500 "dhchap_digests": [ 00:05:57.500 "sha256", 00:05:57.500 "sha384", 00:05:57.500 "sha512" 00:05:57.500 ], 00:05:57.500 "dhchap_dhgroups": [ 00:05:57.500 "null", 00:05:57.500 "ffdhe2048", 00:05:57.500 "ffdhe3072", 00:05:57.500 "ffdhe4096", 00:05:57.500 "ffdhe6144", 00:05:57.500 "ffdhe8192" 00:05:57.500 ] 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "nvmf_set_max_subsystems", 00:05:57.500 "params": { 00:05:57.500 "max_subsystems": 1024 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "nvmf_set_crdt", 00:05:57.500 "params": { 00:05:57.500 "crdt1": 0, 00:05:57.500 "crdt2": 0, 00:05:57.500 "crdt3": 0 00:05:57.500 } 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "method": "nvmf_create_transport", 00:05:57.500 "params": { 00:05:57.500 "trtype": "TCP", 00:05:57.500 "max_queue_depth": 128, 00:05:57.500 "max_io_qpairs_per_ctrlr": 127, 00:05:57.500 "in_capsule_data_size": 4096, 00:05:57.500 "max_io_size": 131072, 00:05:57.500 "io_unit_size": 131072, 00:05:57.500 "max_aq_depth": 128, 00:05:57.500 "num_shared_buffers": 511, 00:05:57.500 "buf_cache_size": 4294967295, 00:05:57.500 "dif_insert_or_strip": false, 00:05:57.500 "zcopy": false, 00:05:57.500 "c2h_success": true, 00:05:57.500 "sock_priority": 0, 00:05:57.500 "abort_timeout_sec": 1, 00:05:57.500 "ack_timeout": 0, 00:05:57.500 "data_wr_pool_size": 0 00:05:57.500 } 00:05:57.500 } 00:05:57.500 ] 00:05:57.500 }, 00:05:57.500 { 00:05:57.500 "subsystem": "iscsi", 00:05:57.500 "config": [ 00:05:57.500 { 00:05:57.500 "method": "iscsi_set_options", 00:05:57.500 "params": { 00:05:57.500 "node_base": "iqn.2016-06.io.spdk", 00:05:57.500 "max_sessions": 
128, 00:05:57.500 "max_connections_per_session": 2, 00:05:57.500 "max_queue_depth": 64, 00:05:57.500 "default_time2wait": 2, 00:05:57.500 "default_time2retain": 20, 00:05:57.500 "first_burst_length": 8192, 00:05:57.500 "immediate_data": true, 00:05:57.500 "allow_duplicated_isid": false, 00:05:57.500 "error_recovery_level": 0, 00:05:57.500 "nop_timeout": 60, 00:05:57.500 "nop_in_interval": 30, 00:05:57.500 "disable_chap": false, 00:05:57.500 "require_chap": false, 00:05:57.500 "mutual_chap": false, 00:05:57.501 "chap_group": 0, 00:05:57.501 "max_large_datain_per_connection": 64, 00:05:57.501 "max_r2t_per_connection": 4, 00:05:57.501 "pdu_pool_size": 36864, 00:05:57.501 "immediate_data_pool_size": 16384, 00:05:57.501 "data_out_pool_size": 2048 00:05:57.501 } 00:05:57.501 } 00:05:57.501 ] 00:05:57.501 } 00:05:57.501 ] 00:05:57.501 } 00:05:57.501 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:57.501 16:32:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1963440 00:05:57.501 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1963440 ']' 00:05:57.501 16:32:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1963440 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963440 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1963440' 00:05:57.501 killing process with pid 1963440 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1963440 00:05:57.501 16:32:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1963440 00:05:57.761 16:32:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1963575 00:05:57.761 16:32:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:57.761 16:32:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1963575 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1963575 ']' 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1963575 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1963575 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1963575' 00:06:03.062 killing process with pid 1963575 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1963575 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1963575 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:03.062 00:06:03.062 real 0m5.897s 00:06:03.062 user 0m5.689s 00:06:03.062 sys 0m0.446s 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 END TEST skip_rpc_with_json 00:06:03.062 ************************************ 00:06:03.062 16:32:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:03.062 16:32:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.062 16:32:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.062 16:32:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 START TEST skip_rpc_with_delay 00:06:03.062 ************************************ 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.062 
[2024-12-06 16:32:51.543037] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.062 00:06:03.062 real 0m0.054s 00:06:03.062 user 0m0.035s 00:06:03.062 sys 0m0.018s 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.062 16:32:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 END TEST skip_rpc_with_delay 00:06:03.062 ************************************ 00:06:03.062 16:32:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.062 16:32:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.062 16:32:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.062 16:32:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.062 16:32:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.062 16:32:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 ************************************ 00:06:03.062 START TEST exit_on_failed_rpc_init 00:06:03.062 ************************************ 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1964866 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1964866 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1964866 ']' 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.062 16:32:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.062 [2024-12-06 16:32:51.645678] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
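The skip_rpc_with_delay trace above shows the NOT expected-failure idiom: the test passes because spdk_tgt refuses the '--no-rpc-server ... --wait-for-rpc' combination and exits nonzero. A minimal sketch of that helper, reconstructed from the traced lines (hypothetical: the real autotest_common.sh helper also validates its argument via valid_exec_arg with type -t/type -P and remaps large exit codes, e.g. 234 -> 106 -> 1 elsewhere in this log, which this sketch omits):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capture its exit status
        (( !es == 0 ))   # arithmetic test from the trace: true only when es != 0
    }

Usage matching the trace: NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc succeeds precisely because the launch fails.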
00:06:03.063 [2024-12-06 16:32:51.645737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1964866 ] 00:06:03.063 [2024-12-06 16:32:51.715472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.063 [2024-12-06 16:32:51.736921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.322 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.323 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.323 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:03.323 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.323 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:03.323 16:32:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:03.323 [2024-12-06 16:32:51.933789] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:03.323 [2024-12-06 16:32:51.933840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1964960 ] 00:06:03.323 [2024-12-06 16:32:52.010103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.582 [2024-12-06 16:32:52.028130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.582 [2024-12-06 16:32:52.028178] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
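The waitforlisten call traced above blocks until pid 1964866 is up on /var/tmp/spdk.sock. A minimal sketch of such a helper; the rpc_addr default and max_retries=100 are taken from the traced locals, while the polling body is an assumption:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [ -S "$rpc_addr" ] && return 0           # socket exists: app is up
            sleep 0.1
        done
        return 1                                     # timed out
    }

This is also why the second spdk_tgt launch above fails: the first instance still owns /var/tmp/spdk.sock, so rpc.c reports the path in use.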
00:06:03.582 [2024-12-06 16:32:52.028188] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:03.582 [2024-12-06 16:32:52.028195] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1964866 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1964866 ']' 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1964866 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1964866 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.582 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1964866' 00:06:03.582 killing process with pid 1964866 00:06:03.583 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1964866 00:06:03.583 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1964866 00:06:03.842 00:06:03.842 real 0m0.678s 00:06:03.842 user 0m0.728s 00:06:03.842 sys 0m0.309s 00:06:03.842 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.842 16:32:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.842 ************************************ 00:06:03.842 END TEST exit_on_failed_rpc_init 00:06:03.842 ************************************ 00:06:03.842 16:32:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:03.842 00:06:03.842 real 0m12.171s 00:06:03.842 user 0m11.651s 00:06:03.842 sys 0m1.159s 00:06:03.842 16:32:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.842 16:32:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.842 ************************************ 00:06:03.842 END TEST skip_rpc 00:06:03.842 ************************************ 00:06:03.842 16:32:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:03.842 16:32:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.842 16:32:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.842 16:32:52 -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.842 ************************************ 00:06:03.842 START TEST rpc_client 00:06:03.842 ************************************ 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:03.842 * Looking for test storage... 00:06:03.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.842 16:32:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.842 --rc genhtml_branch_coverage=1 00:06:03.842 --rc genhtml_function_coverage=1 00:06:03.842 --rc genhtml_legend=1 00:06:03.842 --rc geninfo_all_blocks=1 00:06:03.842 --rc geninfo_unexecuted_blocks=1 00:06:03.842 00:06:03.842 ' 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.842 --rc genhtml_branch_coverage=1 00:06:03.842 --rc genhtml_function_coverage=1 00:06:03.842 --rc genhtml_legend=1 00:06:03.842 --rc geninfo_all_blocks=1 00:06:03.842 --rc geninfo_unexecuted_blocks=1 00:06:03.842 00:06:03.842 ' 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.842 --rc genhtml_branch_coverage=1 00:06:03.842 --rc genhtml_function_coverage=1 00:06:03.842 --rc genhtml_legend=1 00:06:03.842 --rc geninfo_all_blocks=1 00:06:03.842 --rc geninfo_unexecuted_blocks=1 00:06:03.842 00:06:03.842 ' 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.842 --rc genhtml_branch_coverage=1 00:06:03.842 --rc genhtml_function_coverage=1 00:06:03.842 --rc genhtml_legend=1 00:06:03.842 --rc geninfo_all_blocks=1 00:06:03.842 --rc geninfo_unexecuted_blocks=1 00:06:03.842 00:06:03.842 ' 00:06:03.842 16:32:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:03.842 OK 00:06:03.842 16:32:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:03.842 00:06:03.842 real 0m0.141s 00:06:03.842 user 0m0.077s 00:06:03.842 sys 0m0.070s 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.842 16:32:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:03.842 ************************************ 00:06:03.842 END TEST rpc_client 00:06:03.842 ************************************ 00:06:03.842 16:32:52 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
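The scripts/common.sh trace above is a component-wise version comparison: lt 1.15 2 decides whether the installed lcov predates 2.0 and therefore needs the --rc lcov_* spellings of the coverage flags. A condensed reconstruction of that trace (the shipped helper additionally validates each component with its decimal regex check and supports more operators):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local ver1_l ver2_l v op=$2
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':' as traced
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }

Here lt 1.15 2 compares 1 against 2 in the first position and returns success, so the trace proceeds to export the lcov 1.x option strings.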
00:06:03.842 16:32:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.842 16:32:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.842 16:32:52 -- common/autotest_common.sh@10 -- # set +x 00:06:04.103 ************************************ 00:06:04.103 START TEST json_config 00:06:04.103 ************************************ 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.103 16:32:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.103 16:32:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.103 16:32:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.103 16:32:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.103 16:32:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.103 16:32:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.103 16:32:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.103 16:32:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:04.103 16:32:52 json_config -- scripts/common.sh@345 -- # : 1 00:06:04.103 16:32:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.103 16:32:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.103 16:32:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:04.103 16:32:52 json_config -- scripts/common.sh@353 -- # local d=1 00:06:04.103 16:32:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.103 16:32:52 json_config -- scripts/common.sh@355 -- # echo 1 00:06:04.103 16:32:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.103 16:32:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@353 -- # local d=2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.103 16:32:52 json_config -- scripts/common.sh@355 -- # echo 2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.103 16:32:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.103 16:32:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.103 16:32:52 json_config -- scripts/common.sh@368 -- # return 0 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.103 --rc genhtml_branch_coverage=1 00:06:04.103 --rc genhtml_function_coverage=1 00:06:04.103 --rc genhtml_legend=1 00:06:04.103 --rc geninfo_all_blocks=1 00:06:04.103 --rc geninfo_unexecuted_blocks=1 00:06:04.103 00:06:04.103 ' 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.103 --rc genhtml_branch_coverage=1 00:06:04.103 --rc genhtml_function_coverage=1 00:06:04.103 --rc genhtml_legend=1 00:06:04.103 --rc geninfo_all_blocks=1 00:06:04.103 --rc geninfo_unexecuted_blocks=1 00:06:04.103 00:06:04.103 ' 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.103 --rc genhtml_branch_coverage=1 00:06:04.103 --rc genhtml_function_coverage=1 00:06:04.103 --rc genhtml_legend=1 00:06:04.103 --rc geninfo_all_blocks=1 00:06:04.103 --rc geninfo_unexecuted_blocks=1 00:06:04.103 00:06:04.103 ' 00:06:04.103 16:32:52 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.103 --rc genhtml_branch_coverage=1 00:06:04.103 --rc genhtml_function_coverage=1 00:06:04.103 --rc genhtml_legend=1 00:06:04.103 --rc geninfo_all_blocks=1 00:06:04.103 --rc geninfo_unexecuted_blocks=1 00:06:04.103 00:06:04.103 ' 00:06:04.103 16:32:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:04.103 16:32:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.103 16:32:52 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.103 16:32:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.103 16:32:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.103 16:32:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.103 16:32:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.103 16:32:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.103 16:32:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.103 16:32:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.103 16:32:52 json_config -- paths/export.sh@5 -- # export PATH 00:06:04.104 16:32:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@51 -- # : 0 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
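The host identity lines traced above come from sourcing nvmf/common.sh: the host NQN is produced by nvme gen-hostnqn (a real nvme-cli command) and the host ID reuses its UUID suffix. How the suffix is extracted is an assumption in this sketch:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:801c19ac-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumed derivation: keep everything after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # array form as traced at @19

The array is later splatted into nvme connect invocations so initiator tests present a stable identity to the target.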
00:06:04.104 16:32:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.104 16:32:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:04.104 INFO: JSON configuration test init 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.104 16:32:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:04.104 16:32:52 json_config -- 
json_config/common.sh@9 -- # local app=target 00:06:04.104 16:32:52 json_config -- json_config/common.sh@10 -- # shift 00:06:04.104 16:32:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:04.104 16:32:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:04.104 16:32:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:04.104 16:32:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.104 16:32:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:04.104 16:32:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1965102 00:06:04.104 16:32:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:04.104 Waiting for target to run... 00:06:04.104 16:32:52 json_config -- json_config/common.sh@25 -- # waitforlisten 1965102 /var/tmp/spdk_tgt.sock 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 1965102 ']' 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.104 16:32:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.104 16:32:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:04.104 [2024-12-06 16:32:52.735527] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
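The declare -A lines traced above show how json_config tests keep per-app launch state in bash associative arrays keyed by 'target'/'initiator'; the array values below are copied from the trace. The wrapper is an illustrative composition only (SPDK_BIN_DIR stands in for the traced absolute build path), not the real json_config_test_start_app body:

    declare -A app_pid=([target]='' [initiator]='')
    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')

    json_config_test_start_app() {
        local app=$1; shift
        # app_params left unquoted so the flag string word-splits into arguments
        "$SPDK_BIN_DIR/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!
        waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }

For example, json_config_test_start_app target --wait-for-rpc reproduces the spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc launch seen in the trace.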
00:06:04.104 [2024-12-06 16:32:52.735599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965102 ] 00:06:04.364 [2024-12-06 16:32:53.055134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.625 [2024-12-06 16:32:53.064229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.884 16:32:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.884 16:32:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:04.884 16:32:53 json_config -- json_config/common.sh@26 -- # echo '' 00:06:04.884 00:06:04.884 16:32:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:04.884 16:32:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:04.884 16:32:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.884 16:32:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.884 16:32:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:04.884 16:32:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:04.884 16:32:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.884 16:32:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.884 16:32:53 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:04.884 16:32:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:04.884 16:32:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:05.451 16:32:54 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:05.451 16:32:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:05.451 16:32:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.451 16:32:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.452 16:32:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:05.452 16:32:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:05.452 16:32:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:05.452 16:32:54 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:05.452 16:32:54 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:05.452 16:32:54 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:05.452 16:32:54 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:05.452 16:32:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:05.710 16:32:54 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@54 -- # sort 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:05.710 16:32:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.710 16:32:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:05.710 16:32:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.710 16:32:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.710 16:32:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:05.710 MallocForNvmf0 00:06:05.710 16:32:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.968 16:32:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:05.968 MallocForNvmf1 00:06:05.968 16:32:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:05.969 16:32:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:06.227 [2024-12-06 16:32:54.700303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:06.227 16:32:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:06.227 16:32:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:06.227 16:32:54 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.227 16:32:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:06.486 16:32:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.486 16:32:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:06.744 16:32:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.744 16:32:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:06.744 [2024-12-06 16:32:55.330246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:06.744 16:32:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:06.744 16:32:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.744 16:32:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.744 16:32:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:06.744 16:32:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.744 16:32:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.744 16:32:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:06.745 16:32:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:06.745 16:32:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:07.003 MallocBdevForConfigChangeCheck 00:06:07.003 16:32:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:07.003 16:32:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.003 16:32:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.003 16:32:55 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:07.003 16:32:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.261 16:32:55 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:07.261 INFO: shutting down applications... 
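Collected from the tgt_rpc calls traced above, this is the exact rpc.py sequence that builds the NVMe-oF/TCP target state which save_config then captures (only the rpc wrapper function is added for brevity):

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024-byte blocks
    rpc nvmf_create_transport -t tcp -u 8192 -c 0          # '*** TCP Transport Init ***'
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The final command produces the 'NVMe/TCP Target Listening on 127.0.0.1 port 4420' notice logged above.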
00:06:07.261 16:32:55 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:07.261 16:32:55 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:07.261 16:32:55 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:07.261 16:32:55 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.831 Calling clear_iscsi_subsystem 00:06:07.831 Calling clear_nvmf_subsystem 00:06:07.831 Calling clear_nbd_subsystem 00:06:07.831 Calling clear_ublk_subsystem 00:06:07.831 Calling clear_vhost_blk_subsystem 00:06:07.831 Calling clear_vhost_scsi_subsystem 00:06:07.831 Calling clear_bdev_subsystem 00:06:07.831 16:32:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:07.831 16:32:56 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:07.831 16:32:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:07.831 16:32:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.831 16:32:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.831 16:32:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:08.091 16:32:56 json_config -- json_config/json_config.sh@352 -- # break 00:06:08.091 16:32:56 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:08.091 16:32:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:08.091 16:32:56 json_config -- json_config/common.sh@31 -- # local app=target 00:06:08.091 16:32:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.091 16:32:56 json_config -- json_config/common.sh@35 -- # [[ -n 1965102 ]] 00:06:08.091 16:32:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1965102 00:06:08.091 16:32:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.091 16:32:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.091 16:32:56 json_config -- json_config/common.sh@41 -- # kill -0 1965102 00:06:08.091 16:32:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.661 16:32:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.661 16:32:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.661 16:32:57 json_config -- json_config/common.sh@41 -- # kill -0 1965102 00:06:08.661 16:32:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.661 16:32:57 json_config -- json_config/common.sh@43 -- # break 00:06:08.661 16:32:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.661 16:32:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.661 SPDK target shutdown done 00:06:08.661 16:32:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:08.661 INFO: relaunching applications... 
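The shutdown traced above is a SIGINT-then-poll pattern: signal the target, then probe with kill -0 for up to 30 iterations at 0.5 s before declaring it down. A reconstruction following the traced structure (the real json_config/common.sh helper errors out instead if the pid is still alive after the loop; that path is assumed away here):

    json_config_test_shutdown_app() {
        local app=$1 i
        kill -SIGINT "${app_pid[$app]}"
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "${app_pid[$app]}" 2>/dev/null; then
                app_pid[$app]=            # slot cleared once the process is gone
                break
            fi
            sleep 0.5
        done
        echo 'SPDK target shutdown done'
    }

Using kill -0 probes process existence without delivering a signal, which is why the trace shows a kill -0 between each sleep.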
00:06:08.661 16:32:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.661 16:32:57 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.661 16:32:57 json_config -- json_config/common.sh@10 -- # shift 00:06:08.661 16:32:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.661 16:32:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.661 16:32:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.661 16:32:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.661 16:32:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.661 16:32:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1966228 00:06:08.661 16:32:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.661 Waiting for target to run... 00:06:08.661 16:32:57 json_config -- json_config/common.sh@25 -- # waitforlisten 1966228 /var/tmp/spdk_tgt.sock 00:06:08.661 16:32:57 json_config -- common/autotest_common.sh@835 -- # '[' -z 1966228 ']' 00:06:08.661 16:32:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.661 16:32:57 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.661 16:32:57 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.661 16:32:57 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.661 16:32:57 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.661 16:32:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.661 [2024-12-06 16:32:57.128002] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:08.661 [2024-12-06 16:32:57.128082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966228 ] 00:06:08.969 [2024-12-06 16:32:57.461996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.969 [2024-12-06 16:32:57.476087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.276 [2024-12-06 16:32:57.950867] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.535 [2024-12-06 16:32:57.983237] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:09.535 16:32:58 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.535 16:32:58 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:09.535 16:32:58 json_config -- json_config/common.sh@26 -- # echo '' 00:06:09.535 00:06:09.535 16:32:58 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:09.535 16:32:58 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:09.535 INFO: Checking if target configuration is the same... 
00:06:09.535 16:32:58 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.535 16:32:58 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:09.535 16:32:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:09.535 + '[' 2 -ne 2 ']' 00:06:09.535 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:09.535 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:09.535 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:09.535 +++ basename /dev/fd/62 00:06:09.535 ++ mktemp /tmp/62.XXX 00:06:09.535 + tmp_file_1=/tmp/62.hpB 00:06:09.535 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:09.535 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:09.535 + tmp_file_2=/tmp/spdk_tgt_config.json.yJ4 00:06:09.535 + ret=0 00:06:09.535 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:09.796 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:09.796 + diff -u /tmp/62.hpB /tmp/spdk_tgt_config.json.yJ4 00:06:09.796 + echo 'INFO: JSON config files are the same' 00:06:09.796 INFO: JSON config files are the same 00:06:09.796 + rm /tmp/62.hpB /tmp/spdk_tgt_config.json.yJ4 00:06:09.796 + exit 0 00:06:09.796 16:32:58 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:09.796 16:32:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:09.796 INFO: changing configuration and checking if this can be detected... 00:06:09.796 16:32:58 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:09.796 16:32:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:10.055 16:32:58 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.055 16:32:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:10.055 16:32:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.055 + '[' 2 -ne 2 ']' 00:06:10.055 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:10.055 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
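The json_diff.sh run traced above compares configurations by normalizing both JSON documents with config_filter.py -method sort, so key order is canonical, and then running diff -u on the results. A condensed sketch of that idiom; live.json and saved.json are hypothetical stand-ins for the traced /dev/fd/62 (live save_config output) and spdk_tgt_config.json, and reading the filter via stdin is an assumption:

    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    tmp1=$(mktemp /tmp/62.XXX)                     # name derives from basename /dev/fd/62
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    $filter -method sort < live.json > "$tmp1"
    $filter -method sort < saved.json > "$tmp2"
    if diff -u "$tmp1" "$tmp2"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'  # traced script dumps both files and exits 1
    fi
    rm "$tmp1" "$tmp2"

Sorting before diffing is what lets the second comparison below detect a real change (the deleted MallocBdevForConfigChangeCheck) rather than mere key reordering.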
00:06:10.055 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:10.055 +++ basename /dev/fd/62 00:06:10.055 ++ mktemp /tmp/62.XXX 00:06:10.055 + tmp_file_1=/tmp/62.Cb3 00:06:10.055 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.055 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:10.055 + tmp_file_2=/tmp/spdk_tgt_config.json.1rf 00:06:10.055 + ret=0 00:06:10.055 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.315 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:10.315 + diff -u /tmp/62.Cb3 /tmp/spdk_tgt_config.json.1rf 00:06:10.315 + ret=1 00:06:10.315 + echo '=== Start of file: /tmp/62.Cb3 ===' 00:06:10.315 + cat /tmp/62.Cb3 00:06:10.315 + echo '=== End of file: /tmp/62.Cb3 ===' 00:06:10.315 + echo '' 00:06:10.315 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1rf ===' 00:06:10.315 + cat /tmp/spdk_tgt_config.json.1rf 00:06:10.315 + echo '=== End of file: /tmp/spdk_tgt_config.json.1rf ===' 00:06:10.315 + echo '' 00:06:10.315 + rm /tmp/62.Cb3 /tmp/spdk_tgt_config.json.1rf 00:06:10.315 + exit 1 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:10.315 INFO: configuration change detected. 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 1966228 ]] 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.315 16:32:58 json_config -- json_config/json_config.sh@330 -- # killprocess 1966228 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 1966228 ']' 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@958 -- # kill -0 1966228 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@959 -- # uname 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.315 16:32:58 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1966228 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1966228' 00:06:10.315 killing process with pid 1966228 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@973 -- # kill 1966228 00:06:10.315 16:32:58 json_config -- common/autotest_common.sh@978 -- # wait 1966228 00:06:10.574 16:32:59 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.574 16:32:59 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:10.574 16:32:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:10.574 16:32:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.574 16:32:59 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:10.574 16:32:59 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:10.574 INFO: Success 00:06:10.574 00:06:10.574 real 0m6.642s 00:06:10.574 user 0m7.960s 00:06:10.574 sys 0m1.443s 00:06:10.574 16:32:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.574 16:32:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.574 ************************************ 00:06:10.574 END TEST json_config 00:06:10.574 ************************************ 00:06:10.574 16:32:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.574 16:32:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.574 16:32:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.574 16:32:59 -- common/autotest_common.sh@10 -- # set +x 00:06:10.574 ************************************ 00:06:10.574 START TEST json_config_extra_key 00:06:10.574 ************************************ 00:06:10.574 16:32:59 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.835 16:32:59 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.835 16:32:59 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.835 16:32:59 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.835 16:32:59 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.835 16:32:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.835 16:32:59 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.836 --rc genhtml_branch_coverage=1 00:06:10.836 --rc genhtml_function_coverage=1 00:06:10.836 --rc genhtml_legend=1 00:06:10.836 --rc geninfo_all_blocks=1 00:06:10.836 --rc geninfo_unexecuted_blocks=1 00:06:10.836 00:06:10.836 ' 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.836 --rc genhtml_branch_coverage=1 00:06:10.836 --rc genhtml_function_coverage=1 00:06:10.836 --rc genhtml_legend=1 00:06:10.836 --rc geninfo_all_blocks=1 00:06:10.836 --rc geninfo_unexecuted_blocks=1 00:06:10.836 00:06:10.836 ' 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.836 --rc genhtml_branch_coverage=1 00:06:10.836 --rc genhtml_function_coverage=1 00:06:10.836 --rc genhtml_legend=1 00:06:10.836 --rc geninfo_all_blocks=1 00:06:10.836 --rc geninfo_unexecuted_blocks=1 00:06:10.836 00:06:10.836 ' 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.836 --rc genhtml_branch_coverage=1 00:06:10.836 --rc genhtml_function_coverage=1 00:06:10.836 --rc genhtml_legend=1 00:06:10.836 --rc geninfo_all_blocks=1 00:06:10.836 --rc geninfo_unexecuted_blocks=1 00:06:10.836 00:06:10.836 ' 00:06:10.836 16:32:59 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.836 16:32:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.836 16:32:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.836 16:32:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.836 16:32:59 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.836 16:32:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.836 16:32:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.836 16:32:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.836 INFO: launching applications... 
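The "[: : integer expression expected" complaint above is a genuine scripting error in nvmf/common.sh rather than test output: line 33 expands an empty variable, so the shell evaluates '[' '' -eq 1 ']', and -eq requires integer operands on both sides. A minimal sketch of the failure and the usual guard; SPDK_TEST_FLAG below is a hypothetical stand-in for whatever variable was empty:

    # Reproduces the message: an empty string is not an integer.
    flag=''
    [ "$flag" -eq 1 ] && echo enabled    # -> [: : integer expression expected

    # Typical guard: substitute 0 when the variable is unset or empty.
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then    # hypothetical variable name
        echo enabled
    fi

The run proceeds past the message because a failing test used as an if condition only yields a non-zero status and does not trigger the script's ERR trap.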
00:06:10.836 16:32:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1967017 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.836 Waiting for target to run... 00:06:10.836 16:32:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1967017 /var/tmp/spdk_tgt.sock 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1967017 ']' 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.836 16:32:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.837 16:32:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.837 16:32:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.837 16:32:59 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.837 [2024-12-06 16:32:59.399652] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:10.837 [2024-12-06 16:32:59.399721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967017 ] 00:06:11.096 [2024-12-06 16:32:59.741531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.097 [2024-12-06 16:32:59.756069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.663 16:33:00 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.663 16:33:00 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.663 00:06:11.663 16:33:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.663 INFO: shutting down applications... 
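json_config_test_start_app launches spdk_tgt with the extra_key.json config, and waitforlisten then blocks until the target accepts RPCs on /var/tmp/spdk_tgt.sock, as the echo above states. A sketch of that polling idea, assuming rpc.py's rpc_get_methods as the liveness probe; the retry count and interval here are illustrative, not the values autotest_common.sh actually uses:

    sock=/var/tmp/spdk_tgt.sock
    for i in $(seq 1 100); do
        # Any successful RPC proves the socket is up; discard the output.
        if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            echo 'target is listening'
            break
        fi
        sleep 0.1
    done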
00:06:11.663 16:33:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1967017 ]] 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1967017 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1967017 00:06:11.663 16:33:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.230 16:33:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.230 16:33:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.230 16:33:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1967017 00:06:12.230 16:33:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.230 16:33:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.230 16:33:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.230 16:33:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.230 SPDK target shutdown done 00:06:12.230 16:33:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.230 Success 00:06:12.230 00:06:12.230 real 0m1.451s 00:06:12.230 user 0m1.029s 00:06:12.230 sys 0m0.413s 00:06:12.230 16:33:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.230 16:33:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 ************************************ 00:06:12.230 END TEST json_config_extra_key 00:06:12.230 ************************************ 00:06:12.230 16:33:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.230 16:33:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.230 16:33:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.230 16:33:00 -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 ************************************ 00:06:12.230 START TEST alias_rpc 00:06:12.230 ************************************ 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.230 * Looking for test storage... 
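The shutdown traced above is the common SIGINT-then-poll pattern: json_config_test_shutdown_app sends SIGINT, then probes with kill -0 (signal 0 checks process existence without delivering anything) for up to 30 half-second intervals before declaring 'SPDK target shutdown done'. Condensed from the trace:

    pid=1967017                     # target pid from this run
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done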
00:06:12.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.230 16:33:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.230 --rc genhtml_branch_coverage=1 00:06:12.230 --rc genhtml_function_coverage=1 00:06:12.230 --rc genhtml_legend=1 00:06:12.230 --rc geninfo_all_blocks=1 00:06:12.230 --rc geninfo_unexecuted_blocks=1 00:06:12.230 00:06:12.230 ' 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.230 --rc genhtml_branch_coverage=1 00:06:12.230 --rc genhtml_function_coverage=1 00:06:12.230 --rc genhtml_legend=1 00:06:12.230 --rc geninfo_all_blocks=1 00:06:12.230 --rc geninfo_unexecuted_blocks=1 00:06:12.230 00:06:12.230 ' 00:06:12.230 16:33:00 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.230 --rc genhtml_branch_coverage=1 00:06:12.230 --rc genhtml_function_coverage=1 00:06:12.230 --rc genhtml_legend=1 00:06:12.230 --rc geninfo_all_blocks=1 00:06:12.230 --rc geninfo_unexecuted_blocks=1 00:06:12.230 00:06:12.230 ' 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.230 --rc genhtml_branch_coverage=1 00:06:12.230 --rc genhtml_function_coverage=1 00:06:12.230 --rc genhtml_legend=1 00:06:12.230 --rc geninfo_all_blocks=1 00:06:12.230 --rc geninfo_unexecuted_blocks=1 00:06:12.230 00:06:12.230 ' 00:06:12.230 16:33:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.230 16:33:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1967465 00:06:12.230 16:33:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1967465 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1967465 ']' 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.230 16:33:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.230 16:33:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.230 [2024-12-06 16:33:00.906913] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:06:12.230 [2024-12-06 16:33:00.906983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967465 ] 00:06:12.489 [2024-12-06 16:33:00.975432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.489 [2024-12-06 16:33:00.993207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.489 16:33:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.489 16:33:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.489 16:33:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:12.747 16:33:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1967465 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1967465 ']' 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1967465 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967465 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.747 16:33:01 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967465' 00:06:12.747 killing process with pid 1967465 00:06:12.748 16:33:01 alias_rpc -- common/autotest_common.sh@973 -- # kill 1967465 00:06:12.748 16:33:01 alias_rpc -- common/autotest_common.sh@978 -- # wait 1967465 00:06:13.006 00:06:13.006 real 0m0.811s 00:06:13.006 user 0m0.840s 00:06:13.006 sys 0m0.322s 00:06:13.006 16:33:01 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.006 16:33:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.006 ************************************ 00:06:13.006 END TEST alias_rpc 00:06:13.006 ************************************ 00:06:13.006 16:33:01 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:13.006 16:33:01 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.006 16:33:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.006 16:33:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.006 16:33:01 -- common/autotest_common.sh@10 -- # set +x 00:06:13.006 ************************************ 00:06:13.006 START TEST spdkcli_tcp 00:06:13.006 ************************************ 00:06:13.006 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.006 * Looking for test storage... 
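killprocess, expanded step by step in the trace above, is deliberately defensive about what it signals: it resolves the pid back to a command name with ps (reactor_0 here, since SPDK renames its main thread after the reactor it runs), special-cases sudo-wrapped processes, and reaps the child with wait so the exit status is observed. A compressed sketch of that helper; the real version in autotest_common.sh handles more edge cases:

    killprocess() {
        local pid=$1
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for spdk_tgt
        # The real helper branches when name is sudo; elided in this sketch.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap, so a bad exit status is not silently lost
    }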
00:06:13.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:13.006 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.006 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.006 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.265 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.265 16:33:01 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.266 16:33:01 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.266 --rc genhtml_branch_coverage=1 00:06:13.266 --rc genhtml_function_coverage=1 00:06:13.266 --rc genhtml_legend=1 00:06:13.266 --rc geninfo_all_blocks=1 00:06:13.266 --rc geninfo_unexecuted_blocks=1 00:06:13.266 00:06:13.266 ' 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.266 --rc genhtml_branch_coverage=1 00:06:13.266 --rc genhtml_function_coverage=1 00:06:13.266 --rc genhtml_legend=1 00:06:13.266 --rc geninfo_all_blocks=1 00:06:13.266 --rc 
geninfo_unexecuted_blocks=1 00:06:13.266 00:06:13.266 ' 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.266 --rc genhtml_branch_coverage=1 00:06:13.266 --rc genhtml_function_coverage=1 00:06:13.266 --rc genhtml_legend=1 00:06:13.266 --rc geninfo_all_blocks=1 00:06:13.266 --rc geninfo_unexecuted_blocks=1 00:06:13.266 00:06:13.266 ' 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.266 --rc genhtml_branch_coverage=1 00:06:13.266 --rc genhtml_function_coverage=1 00:06:13.266 --rc genhtml_legend=1 00:06:13.266 --rc geninfo_all_blocks=1 00:06:13.266 --rc geninfo_unexecuted_blocks=1 00:06:13.266 00:06:13.266 ' 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1967587 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1967587 00:06:13.266 16:33:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1967587 ']' 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.266 16:33:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.266 [2024-12-06 16:33:01.767203] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
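Unlike the earlier suites, spdkcli_tcp drives the RPC server over TCP rather than the UNIX socket directly: a socat process (started just below, pid 1967746) bridges the IP_ADDRESS/PORT pair defined above to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP endpoint. The shape of that bridge, using the values from this trace:

    # Forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Exercise an RPC through the bridge; -r sets connection retries and
    # -t the timeout, matching the invocation traced below.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods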
00:06:13.266 [2024-12-06 16:33:01.767257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967587 ] 00:06:13.266 [2024-12-06 16:33:01.835186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.266 [2024-12-06 16:33:01.855050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.266 [2024-12-06 16:33:01.855052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.525 16:33:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.525 16:33:02 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:13.525 16:33:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1967746 00:06:13.525 16:33:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:13.525 16:33:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:13.525 [ 00:06:13.525 "bdev_malloc_delete", 00:06:13.525 "bdev_malloc_create", 00:06:13.525 "bdev_null_resize", 00:06:13.525 "bdev_null_delete", 00:06:13.525 "bdev_null_create", 00:06:13.525 "bdev_nvme_cuse_unregister", 00:06:13.525 "bdev_nvme_cuse_register", 00:06:13.525 "bdev_opal_new_user", 00:06:13.525 "bdev_opal_set_lock_state", 00:06:13.525 "bdev_opal_delete", 00:06:13.525 "bdev_opal_get_info", 00:06:13.525 "bdev_opal_create", 00:06:13.525 "bdev_nvme_opal_revert", 00:06:13.525 "bdev_nvme_opal_init", 00:06:13.525 "bdev_nvme_send_cmd", 00:06:13.525 "bdev_nvme_set_keys", 00:06:13.525 "bdev_nvme_get_path_iostat", 00:06:13.525 "bdev_nvme_get_mdns_discovery_info", 00:06:13.525 "bdev_nvme_stop_mdns_discovery", 00:06:13.525 "bdev_nvme_start_mdns_discovery", 00:06:13.525 "bdev_nvme_set_multipath_policy", 00:06:13.525 "bdev_nvme_set_preferred_path", 00:06:13.525 "bdev_nvme_get_io_paths", 00:06:13.525 "bdev_nvme_remove_error_injection", 00:06:13.525 "bdev_nvme_add_error_injection", 00:06:13.525 "bdev_nvme_get_discovery_info", 00:06:13.525 "bdev_nvme_stop_discovery", 00:06:13.525 "bdev_nvme_start_discovery", 00:06:13.525 "bdev_nvme_get_controller_health_info", 00:06:13.525 "bdev_nvme_disable_controller", 00:06:13.525 "bdev_nvme_enable_controller", 00:06:13.525 "bdev_nvme_reset_controller", 00:06:13.525 "bdev_nvme_get_transport_statistics", 00:06:13.525 "bdev_nvme_apply_firmware", 00:06:13.525 "bdev_nvme_detach_controller", 00:06:13.525 "bdev_nvme_get_controllers", 00:06:13.525 "bdev_nvme_attach_controller", 00:06:13.525 "bdev_nvme_set_hotplug", 00:06:13.525 "bdev_nvme_set_options", 00:06:13.525 "bdev_passthru_delete", 00:06:13.525 "bdev_passthru_create", 00:06:13.525 "bdev_lvol_set_parent_bdev", 00:06:13.525 "bdev_lvol_set_parent", 00:06:13.525 "bdev_lvol_check_shallow_copy", 00:06:13.525 "bdev_lvol_start_shallow_copy", 00:06:13.525 "bdev_lvol_grow_lvstore", 00:06:13.525 "bdev_lvol_get_lvols", 00:06:13.525 "bdev_lvol_get_lvstores", 00:06:13.525 "bdev_lvol_delete", 00:06:13.525 "bdev_lvol_set_read_only", 00:06:13.525 "bdev_lvol_resize", 00:06:13.525 "bdev_lvol_decouple_parent", 00:06:13.525 "bdev_lvol_inflate", 00:06:13.525 "bdev_lvol_rename", 00:06:13.525 "bdev_lvol_clone_bdev", 00:06:13.525 "bdev_lvol_clone", 00:06:13.525 "bdev_lvol_snapshot", 00:06:13.525 "bdev_lvol_create", 00:06:13.525 "bdev_lvol_delete_lvstore", 00:06:13.525 "bdev_lvol_rename_lvstore", 
00:06:13.525 "bdev_lvol_create_lvstore", 00:06:13.525 "bdev_raid_set_options", 00:06:13.525 "bdev_raid_remove_base_bdev", 00:06:13.525 "bdev_raid_add_base_bdev", 00:06:13.525 "bdev_raid_delete", 00:06:13.525 "bdev_raid_create", 00:06:13.525 "bdev_raid_get_bdevs", 00:06:13.525 "bdev_error_inject_error", 00:06:13.525 "bdev_error_delete", 00:06:13.525 "bdev_error_create", 00:06:13.525 "bdev_split_delete", 00:06:13.525 "bdev_split_create", 00:06:13.525 "bdev_delay_delete", 00:06:13.525 "bdev_delay_create", 00:06:13.525 "bdev_delay_update_latency", 00:06:13.525 "bdev_zone_block_delete", 00:06:13.525 "bdev_zone_block_create", 00:06:13.525 "blobfs_create", 00:06:13.525 "blobfs_detect", 00:06:13.525 "blobfs_set_cache_size", 00:06:13.525 "bdev_aio_delete", 00:06:13.525 "bdev_aio_rescan", 00:06:13.525 "bdev_aio_create", 00:06:13.525 "bdev_ftl_set_property", 00:06:13.525 "bdev_ftl_get_properties", 00:06:13.526 "bdev_ftl_get_stats", 00:06:13.526 "bdev_ftl_unmap", 00:06:13.526 "bdev_ftl_unload", 00:06:13.526 "bdev_ftl_delete", 00:06:13.526 "bdev_ftl_load", 00:06:13.526 "bdev_ftl_create", 00:06:13.526 "bdev_virtio_attach_controller", 00:06:13.526 "bdev_virtio_scsi_get_devices", 00:06:13.526 "bdev_virtio_detach_controller", 00:06:13.526 "bdev_virtio_blk_set_hotplug", 00:06:13.526 "bdev_iscsi_delete", 00:06:13.526 "bdev_iscsi_create", 00:06:13.526 "bdev_iscsi_set_options", 00:06:13.526 "accel_error_inject_error", 00:06:13.526 "ioat_scan_accel_module", 00:06:13.526 "dsa_scan_accel_module", 00:06:13.526 "iaa_scan_accel_module", 00:06:13.526 "vfu_virtio_create_fs_endpoint", 00:06:13.526 "vfu_virtio_create_scsi_endpoint", 00:06:13.526 "vfu_virtio_scsi_remove_target", 00:06:13.526 "vfu_virtio_scsi_add_target", 00:06:13.526 "vfu_virtio_create_blk_endpoint", 00:06:13.526 "vfu_virtio_delete_endpoint", 00:06:13.526 "keyring_file_remove_key", 00:06:13.526 "keyring_file_add_key", 00:06:13.526 "keyring_linux_set_options", 00:06:13.526 "fsdev_aio_delete", 00:06:13.526 "fsdev_aio_create", 00:06:13.526 "iscsi_get_histogram", 00:06:13.526 "iscsi_enable_histogram", 00:06:13.526 "iscsi_set_options", 00:06:13.526 "iscsi_get_auth_groups", 00:06:13.526 "iscsi_auth_group_remove_secret", 00:06:13.526 "iscsi_auth_group_add_secret", 00:06:13.526 "iscsi_delete_auth_group", 00:06:13.526 "iscsi_create_auth_group", 00:06:13.526 "iscsi_set_discovery_auth", 00:06:13.526 "iscsi_get_options", 00:06:13.526 "iscsi_target_node_request_logout", 00:06:13.526 "iscsi_target_node_set_redirect", 00:06:13.526 "iscsi_target_node_set_auth", 00:06:13.526 "iscsi_target_node_add_lun", 00:06:13.526 "iscsi_get_stats", 00:06:13.526 "iscsi_get_connections", 00:06:13.526 "iscsi_portal_group_set_auth", 00:06:13.526 "iscsi_start_portal_group", 00:06:13.526 "iscsi_delete_portal_group", 00:06:13.526 "iscsi_create_portal_group", 00:06:13.526 "iscsi_get_portal_groups", 00:06:13.526 "iscsi_delete_target_node", 00:06:13.526 "iscsi_target_node_remove_pg_ig_maps", 00:06:13.526 "iscsi_target_node_add_pg_ig_maps", 00:06:13.526 "iscsi_create_target_node", 00:06:13.526 "iscsi_get_target_nodes", 00:06:13.526 "iscsi_delete_initiator_group", 00:06:13.526 "iscsi_initiator_group_remove_initiators", 00:06:13.526 "iscsi_initiator_group_add_initiators", 00:06:13.526 "iscsi_create_initiator_group", 00:06:13.526 "iscsi_get_initiator_groups", 00:06:13.526 "nvmf_set_crdt", 00:06:13.526 "nvmf_set_config", 00:06:13.526 "nvmf_set_max_subsystems", 00:06:13.526 "nvmf_stop_mdns_prr", 00:06:13.526 "nvmf_publish_mdns_prr", 00:06:13.526 "nvmf_subsystem_get_listeners", 00:06:13.526 
"nvmf_subsystem_get_qpairs", 00:06:13.526 "nvmf_subsystem_get_controllers", 00:06:13.526 "nvmf_get_stats", 00:06:13.526 "nvmf_get_transports", 00:06:13.526 "nvmf_create_transport", 00:06:13.526 "nvmf_get_targets", 00:06:13.526 "nvmf_delete_target", 00:06:13.526 "nvmf_create_target", 00:06:13.526 "nvmf_subsystem_allow_any_host", 00:06:13.526 "nvmf_subsystem_set_keys", 00:06:13.526 "nvmf_subsystem_remove_host", 00:06:13.526 "nvmf_subsystem_add_host", 00:06:13.526 "nvmf_ns_remove_host", 00:06:13.526 "nvmf_ns_add_host", 00:06:13.526 "nvmf_subsystem_remove_ns", 00:06:13.526 "nvmf_subsystem_set_ns_ana_group", 00:06:13.526 "nvmf_subsystem_add_ns", 00:06:13.526 "nvmf_subsystem_listener_set_ana_state", 00:06:13.526 "nvmf_discovery_get_referrals", 00:06:13.526 "nvmf_discovery_remove_referral", 00:06:13.526 "nvmf_discovery_add_referral", 00:06:13.526 "nvmf_subsystem_remove_listener", 00:06:13.526 "nvmf_subsystem_add_listener", 00:06:13.526 "nvmf_delete_subsystem", 00:06:13.526 "nvmf_create_subsystem", 00:06:13.526 "nvmf_get_subsystems", 00:06:13.526 "env_dpdk_get_mem_stats", 00:06:13.526 "nbd_get_disks", 00:06:13.526 "nbd_stop_disk", 00:06:13.526 "nbd_start_disk", 00:06:13.526 "ublk_recover_disk", 00:06:13.526 "ublk_get_disks", 00:06:13.526 "ublk_stop_disk", 00:06:13.526 "ublk_start_disk", 00:06:13.526 "ublk_destroy_target", 00:06:13.526 "ublk_create_target", 00:06:13.526 "virtio_blk_create_transport", 00:06:13.526 "virtio_blk_get_transports", 00:06:13.526 "vhost_controller_set_coalescing", 00:06:13.526 "vhost_get_controllers", 00:06:13.526 "vhost_delete_controller", 00:06:13.526 "vhost_create_blk_controller", 00:06:13.526 "vhost_scsi_controller_remove_target", 00:06:13.526 "vhost_scsi_controller_add_target", 00:06:13.526 "vhost_start_scsi_controller", 00:06:13.526 "vhost_create_scsi_controller", 00:06:13.526 "thread_set_cpumask", 00:06:13.526 "scheduler_set_options", 00:06:13.526 "framework_get_governor", 00:06:13.526 "framework_get_scheduler", 00:06:13.526 "framework_set_scheduler", 00:06:13.526 "framework_get_reactors", 00:06:13.526 "thread_get_io_channels", 00:06:13.526 "thread_get_pollers", 00:06:13.526 "thread_get_stats", 00:06:13.526 "framework_monitor_context_switch", 00:06:13.526 "spdk_kill_instance", 00:06:13.526 "log_enable_timestamps", 00:06:13.526 "log_get_flags", 00:06:13.526 "log_clear_flag", 00:06:13.526 "log_set_flag", 00:06:13.526 "log_get_level", 00:06:13.526 "log_set_level", 00:06:13.526 "log_get_print_level", 00:06:13.526 "log_set_print_level", 00:06:13.526 "framework_enable_cpumask_locks", 00:06:13.526 "framework_disable_cpumask_locks", 00:06:13.526 "framework_wait_init", 00:06:13.526 "framework_start_init", 00:06:13.526 "scsi_get_devices", 00:06:13.526 "bdev_get_histogram", 00:06:13.526 "bdev_enable_histogram", 00:06:13.526 "bdev_set_qos_limit", 00:06:13.526 "bdev_set_qd_sampling_period", 00:06:13.526 "bdev_get_bdevs", 00:06:13.526 "bdev_reset_iostat", 00:06:13.526 "bdev_get_iostat", 00:06:13.526 "bdev_examine", 00:06:13.526 "bdev_wait_for_examine", 00:06:13.526 "bdev_set_options", 00:06:13.526 "accel_get_stats", 00:06:13.526 "accel_set_options", 00:06:13.526 "accel_set_driver", 00:06:13.526 "accel_crypto_key_destroy", 00:06:13.526 "accel_crypto_keys_get", 00:06:13.526 "accel_crypto_key_create", 00:06:13.526 "accel_assign_opc", 00:06:13.526 "accel_get_module_info", 00:06:13.526 "accel_get_opc_assignments", 00:06:13.526 "vmd_rescan", 00:06:13.526 "vmd_remove_device", 00:06:13.526 "vmd_enable", 00:06:13.526 "sock_get_default_impl", 00:06:13.526 "sock_set_default_impl", 
00:06:13.526 "sock_impl_set_options", 00:06:13.526 "sock_impl_get_options", 00:06:13.526 "iobuf_get_stats", 00:06:13.526 "iobuf_set_options", 00:06:13.526 "keyring_get_keys", 00:06:13.526 "vfu_tgt_set_base_path", 00:06:13.526 "framework_get_pci_devices", 00:06:13.526 "framework_get_config", 00:06:13.526 "framework_get_subsystems", 00:06:13.526 "fsdev_set_opts", 00:06:13.526 "fsdev_get_opts", 00:06:13.526 "trace_get_info", 00:06:13.526 "trace_get_tpoint_group_mask", 00:06:13.526 "trace_disable_tpoint_group", 00:06:13.526 "trace_enable_tpoint_group", 00:06:13.526 "trace_clear_tpoint_mask", 00:06:13.526 "trace_set_tpoint_mask", 00:06:13.526 "notify_get_notifications", 00:06:13.526 "notify_get_types", 00:06:13.526 "spdk_get_version", 00:06:13.526 "rpc_get_methods" 00:06:13.526 ] 00:06:13.526 16:33:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:13.526 16:33:02 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.526 16:33:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.526 16:33:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:13.526 16:33:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1967587 00:06:13.526 16:33:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1967587 ']' 00:06:13.526 16:33:02 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1967587 00:06:13.526 16:33:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:13.526 16:33:02 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.526 16:33:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967587 00:06:13.785 16:33:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.785 16:33:02 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.786 16:33:02 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967587' 00:06:13.786 killing process with pid 1967587 00:06:13.786 16:33:02 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1967587 00:06:13.786 16:33:02 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1967587 00:06:13.786 00:06:13.786 real 0m0.822s 00:06:13.786 user 0m1.370s 00:06:13.786 sys 0m0.351s 00:06:13.786 16:33:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.786 16:33:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.786 ************************************ 00:06:13.786 END TEST spdkcli_tcp 00:06:13.786 ************************************ 00:06:13.786 16:33:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.786 16:33:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.786 16:33:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.786 16:33:02 -- common/autotest_common.sh@10 -- # set +x 00:06:13.786 ************************************ 00:06:13.786 START TEST dpdk_mem_utility 00:06:13.786 ************************************ 00:06:13.786 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.045 * Looking for test storage... 
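The long array above is the complete rpc_get_methods listing as fetched through the TCP bridge: a JSON array of every method name the target has registered. For a spot check, the same call can be filtered client-side; the jq pipeline here is illustrative and not part of tcp.sh:

    scripts/rpc.py -s 127.0.0.1 -p 9998 rpc_get_methods \
        | jq -r '.[]' | grep -x nvmf_create_transport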
00:06:14.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.045 16:33:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.045 --rc genhtml_branch_coverage=1 00:06:14.045 --rc genhtml_function_coverage=1 00:06:14.045 --rc genhtml_legend=1 00:06:14.045 --rc geninfo_all_blocks=1 00:06:14.045 --rc geninfo_unexecuted_blocks=1 00:06:14.045 00:06:14.045 ' 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.045 --rc 
genhtml_branch_coverage=1 00:06:14.045 --rc genhtml_function_coverage=1 00:06:14.045 --rc genhtml_legend=1 00:06:14.045 --rc geninfo_all_blocks=1 00:06:14.045 --rc geninfo_unexecuted_blocks=1 00:06:14.045 00:06:14.045 ' 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.045 --rc genhtml_branch_coverage=1 00:06:14.045 --rc genhtml_function_coverage=1 00:06:14.045 --rc genhtml_legend=1 00:06:14.045 --rc geninfo_all_blocks=1 00:06:14.045 --rc geninfo_unexecuted_blocks=1 00:06:14.045 00:06:14.045 ' 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.045 --rc genhtml_branch_coverage=1 00:06:14.045 --rc genhtml_function_coverage=1 00:06:14.045 --rc genhtml_legend=1 00:06:14.045 --rc geninfo_all_blocks=1 00:06:14.045 --rc geninfo_unexecuted_blocks=1 00:06:14.045 00:06:14.045 ' 00:06:14.045 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.045 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1967991 00:06:14.045 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1967991 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1967991 ']' 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.045 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.045 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.045 [2024-12-06 16:33:02.623885] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
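dpdk_mem_utility pairs one RPC with one post-processing script: env_dpdk_get_mem_stats makes the running target write its DPDK memory state to a file (reported as /tmp/spdk_mem_dump.txt just below), and dpdk_mem_info.py renders that dump as the heap, mempool and memzone tables that follow. The flow as invoked in this trace, shown here as direct calls rather than through the rpc_cmd wrapper the test uses:

    # Ask the live target to dump DPDK memory stats to a file.
    scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }

    scripts/dpdk_mem_info.py          # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0     # judging from the output below, the
                                      # element-level detail for heap id 0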
00:06:14.045 [2024-12-06 16:33:02.623947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967991 ] 00:06:14.045 [2024-12-06 16:33:02.691731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.045 [2024-12-06 16:33:02.712113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.306 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.306 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:14.306 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.306 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.306 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:14.306 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.306 { 00:06:14.306 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.306 } 00:06:14.306 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:14.306 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.306 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:14.306 1 heaps totaling size 818.000000 MiB 00:06:14.306 size: 818.000000 MiB heap id: 0 00:06:14.306 end heaps---------- 00:06:14.306 9 mempools totaling size 603.782043 MiB 00:06:14.306 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.306 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.306 size: 100.555481 MiB name: bdev_io_1967991 00:06:14.306 size: 50.003479 MiB name: msgpool_1967991 00:06:14.306 size: 36.509338 MiB name: fsdev_io_1967991 00:06:14.306 size: 21.763794 MiB name: PDU_Pool 00:06:14.306 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:14.306 size: 4.133484 MiB name: evtpool_1967991 00:06:14.306 size: 0.026123 MiB name: Session_Pool 00:06:14.306 end mempools------- 00:06:14.306 6 memzones totaling size 4.142822 MiB 00:06:14.306 size: 1.000366 MiB name: RG_ring_0_1967991 00:06:14.306 size: 1.000366 MiB name: RG_ring_1_1967991 00:06:14.306 size: 1.000366 MiB name: RG_ring_4_1967991 00:06:14.306 size: 1.000366 MiB name: RG_ring_5_1967991 00:06:14.306 size: 0.125366 MiB name: RG_ring_2_1967991 00:06:14.306 size: 0.015991 MiB name: RG_ring_3_1967991 00:06:14.306 end memzones------- 00:06:14.306 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.306 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:06:14.306 list of free elements. 
size: 10.852478 MiB 00:06:14.306 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:14.306 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:14.306 element at address: 0x200000400000 with size: 0.998535 MiB 00:06:14.306 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:14.306 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:14.306 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:14.306 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:14.306 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:14.306 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:06:14.306 element at address: 0x200000c00000 with size: 0.495422 MiB 00:06:14.306 element at address: 0x20000a600000 with size: 0.490723 MiB 00:06:14.306 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:14.306 element at address: 0x200003e00000 with size: 0.481934 MiB 00:06:14.306 element at address: 0x200028200000 with size: 0.410034 MiB 00:06:14.306 element at address: 0x200000800000 with size: 0.355042 MiB 00:06:14.306 list of standard malloc elements. size: 199.218628 MiB 00:06:14.306 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:14.306 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:14.306 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:14.306 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:14.306 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:14.306 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:14.306 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:14.306 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:14.306 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:14.306 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000085b040 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000085f300 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:06:14.306 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:14.306 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200028268f80 with size: 0.000183 MiB 00:06:14.306 element at address: 0x200028269040 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:14.306 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:14.306 list of memzone associated elements. size: 607.928894 MiB 00:06:14.306 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:14.306 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.306 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:14.306 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.306 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:14.307 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1967991_0 00:06:14.307 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:14.307 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1967991_0 00:06:14.307 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:14.307 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1967991_0 00:06:14.307 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:14.307 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.307 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:14.307 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.307 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:14.307 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1967991_0 00:06:14.307 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:14.307 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1967991 00:06:14.307 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:14.307 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1967991 00:06:14.307 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:14.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.307 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:14.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.307 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:14.307 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.307 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:14.307 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.307 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:14.307 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1967991 00:06:14.307 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:14.307 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1967991 00:06:14.307 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:14.307 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1967991 00:06:14.307 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:06:14.307 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1967991 00:06:14.307 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:14.307 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1967991 00:06:14.307 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:14.307 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1967991 00:06:14.307 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:14.307 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.307 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:14.307 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.307 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:14.307 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.307 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:14.307 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1967991 00:06:14.307 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:06:14.307 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1967991 00:06:14.307 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:14.307 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.307 element at address: 0x200028269100 with size: 0.023743 MiB 00:06:14.307 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.307 element at address: 0x20000085b100 with size: 0.016113 MiB 00:06:14.307 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1967991 00:06:14.307 element at address: 0x20002826f240 with size: 0.002441 MiB 00:06:14.307 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.307 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:06:14.307 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1967991 00:06:14.307 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:14.307 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1967991 00:06:14.307 element at address: 0x20000085af00 with size: 0.000305 MiB 00:06:14.307 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1967991 00:06:14.307 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:06:14.307 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.307 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.307 16:33:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1967991 00:06:14.307 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1967991 ']' 00:06:14.307 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1967991 00:06:14.307 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:14.307 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.307 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1967991 00:06:14.307 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.567 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.567 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1967991' 00:06:14.567 killing process with pid 1967991 00:06:14.567 16:33:02 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1967991 00:06:14.567 16:33:02 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1967991 00:06:14.567 00:06:14.567 real 0m0.706s 00:06:14.567 user 0m0.691s 00:06:14.567 sys 0m0.294s 00:06:14.567 16:33:03 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.567 16:33:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.567 ************************************ 00:06:14.567 END TEST dpdk_mem_utility 00:06:14.567 ************************************ 00:06:14.567 16:33:03 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:14.567 16:33:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.567 16:33:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.567 16:33:03 -- common/autotest_common.sh@10 -- # set +x 00:06:14.567 ************************************ 00:06:14.567 START TEST event 00:06:14.567 ************************************ 00:06:14.567 16:33:03 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:14.826 * Looking for test storage... 00:06:14.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.826 16:33:03 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.826 16:33:03 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.826 16:33:03 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.826 16:33:03 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.826 16:33:03 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.826 16:33:03 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.826 16:33:03 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.826 16:33:03 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.826 16:33:03 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.826 16:33:03 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.826 16:33:03 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.826 16:33:03 event -- scripts/common.sh@344 -- # case "$op" in 00:06:14.826 16:33:03 event -- scripts/common.sh@345 -- # : 1 00:06:14.826 16:33:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.826 16:33:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.826 16:33:03 event -- scripts/common.sh@365 -- # decimal 1 00:06:14.826 16:33:03 event -- scripts/common.sh@353 -- # local d=1 00:06:14.826 16:33:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.826 16:33:03 event -- scripts/common.sh@355 -- # echo 1 00:06:14.826 16:33:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.826 16:33:03 event -- scripts/common.sh@366 -- # decimal 2 00:06:14.826 16:33:03 event -- scripts/common.sh@353 -- # local d=2 00:06:14.826 16:33:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.826 16:33:03 event -- scripts/common.sh@355 -- # echo 2 00:06:14.826 16:33:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.826 16:33:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.826 16:33:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.826 16:33:03 event -- scripts/common.sh@368 -- # return 0 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.826 --rc genhtml_branch_coverage=1 00:06:14.826 --rc genhtml_function_coverage=1 00:06:14.826 --rc genhtml_legend=1 00:06:14.826 --rc geninfo_all_blocks=1 00:06:14.826 --rc geninfo_unexecuted_blocks=1 00:06:14.826 00:06:14.826 ' 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.826 --rc genhtml_branch_coverage=1 00:06:14.826 --rc genhtml_function_coverage=1 00:06:14.826 --rc genhtml_legend=1 00:06:14.826 --rc geninfo_all_blocks=1 00:06:14.826 --rc geninfo_unexecuted_blocks=1 00:06:14.826 00:06:14.826 ' 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.826 --rc genhtml_branch_coverage=1 00:06:14.826 --rc genhtml_function_coverage=1 00:06:14.826 --rc genhtml_legend=1 00:06:14.826 --rc geninfo_all_blocks=1 00:06:14.826 --rc geninfo_unexecuted_blocks=1 00:06:14.826 00:06:14.826 ' 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.826 --rc genhtml_branch_coverage=1 00:06:14.826 --rc genhtml_function_coverage=1 00:06:14.826 --rc genhtml_legend=1 00:06:14.826 --rc geninfo_all_blocks=1 00:06:14.826 --rc geninfo_unexecuted_blocks=1 00:06:14.826 00:06:14.826 ' 00:06:14.826 16:33:03 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:14.826 16:33:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:14.826 16:33:03 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:14.826 16:33:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.826 16:33:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.826 ************************************ 00:06:14.826 START TEST event_perf 00:06:14.827 ************************************ 00:06:14.827 16:33:03 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:06:14.827 Running I/O for 1 seconds...[2024-12-06 16:33:03.375630] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:14.827 [2024-12-06 16:33:03.375686] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968124 ] 00:06:14.827 [2024-12-06 16:33:03.446439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.827 [2024-12-06 16:33:03.471094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.827 [2024-12-06 16:33:03.471252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.827 [2024-12-06 16:33:03.471506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.827 [2024-12-06 16:33:03.471506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.206 Running I/O for 1 seconds... 00:06:16.206 lcore 0: 183103 00:06:16.206 lcore 1: 183106 00:06:16.206 lcore 2: 183104 00:06:16.206 lcore 3: 183100 00:06:16.206 done. 00:06:16.206 00:06:16.206 real 0m1.129s 00:06:16.206 user 0m4.053s 00:06:16.206 sys 0m0.074s 00:06:16.206 16:33:04 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.206 16:33:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.206 ************************************ 00:06:16.206 END TEST event_perf 00:06:16.206 ************************************ 00:06:16.206 16:33:04 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.206 16:33:04 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:16.206 16:33:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.206 16:33:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.206 ************************************ 00:06:16.206 START TEST event_reactor 00:06:16.206 ************************************ 00:06:16.206 16:33:04 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.206 [2024-12-06 16:33:04.550731] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
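A note on the event_perf run that completed above: as I read the output, each "lcore N:" figure is the number of events that reactor processed during the one-second run (~183k per core here). A minimal manual rerun, assuming the same checkout path and a privileged shell; SPDK_DIR is my own shorthand, not a variable the harness sets:
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -m 0xF places reactors on cores 0-3; -t 1 runs the measurement for one second
sudo $SPDK_DIR/test/event/event_perf/event_perf -m 0xF -t 1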
00:06:16.206 [2024-12-06 16:33:04.550768] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968432 ] 00:06:16.206 [2024-12-06 16:33:04.607142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.206 [2024-12-06 16:33:04.622659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.143 test_start 00:06:17.143 oneshot 00:06:17.143 tick 100 00:06:17.143 tick 100 00:06:17.143 tick 250 00:06:17.143 tick 100 00:06:17.143 tick 100 00:06:17.143 tick 100 00:06:17.143 tick 250 00:06:17.143 tick 500 00:06:17.143 tick 100 00:06:17.143 tick 100 00:06:17.143 tick 250 00:06:17.143 tick 100 00:06:17.143 tick 100 00:06:17.143 test_end 00:06:17.143 00:06:17.143 real 0m1.100s 00:06:17.143 user 0m1.046s 00:06:17.143 sys 0m0.050s 00:06:17.143 16:33:05 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.143 16:33:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:17.143 ************************************ 00:06:17.143 END TEST event_reactor 00:06:17.143 ************************************ 00:06:17.143 16:33:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.143 16:33:05 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:17.143 16:33:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.143 16:33:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.143 ************************************ 00:06:17.143 START TEST event_reactor_perf 00:06:17.143 ************************************ 00:06:17.143 16:33:05 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.143 [2024-12-06 16:33:05.699033] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
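The reactor test that finished above runs a single reactor; my reading of its output is that each "tick <n>" line marks a registered poller with that period firing and "oneshot" marks a one-shot event, so the test checks the reactor dispatches both kinds for the duration. Rerunning it standalone, same path assumption as before:
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# one reactor, one-second run, mirroring the invocation traced above
sudo $SPDK_DIR/test/event/reactor/reactor -t 1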
00:06:17.143 [2024-12-06 16:33:05.699079] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968788 ] 00:06:17.143 [2024-12-06 16:33:05.761544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.143 [2024-12-06 16:33:05.777625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.522 test_start 00:06:18.522 test_end 00:06:18.522 Performance: 536714 events per second 00:06:18.522 00:06:18.522 real 0m1.108s 00:06:18.522 user 0m1.049s 00:06:18.522 sys 0m0.055s 00:06:18.522 16:33:06 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.522 16:33:06 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.522 ************************************ 00:06:18.522 END TEST event_reactor_perf 00:06:18.522 ************************************ 00:06:18.522 16:33:06 event -- event/event.sh@49 -- # uname -s 00:06:18.522 16:33:06 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:18.522 16:33:06 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:18.522 16:33:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.522 16:33:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.522 16:33:06 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.522 ************************************ 00:06:18.522 START TEST event_scheduler 00:06:18.522 ************************************ 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:18.522 * Looking for test storage... 
00:06:18.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.522 16:33:06 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.522 --rc genhtml_branch_coverage=1 00:06:18.522 --rc genhtml_function_coverage=1 00:06:18.522 --rc genhtml_legend=1 00:06:18.522 --rc geninfo_all_blocks=1 00:06:18.522 --rc geninfo_unexecuted_blocks=1 00:06:18.522 00:06:18.522 ' 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.522 --rc genhtml_branch_coverage=1 00:06:18.522 --rc genhtml_function_coverage=1 00:06:18.522 --rc genhtml_legend=1 00:06:18.522 --rc geninfo_all_blocks=1 00:06:18.522 --rc geninfo_unexecuted_blocks=1 00:06:18.522 00:06:18.522 ' 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.522 --rc genhtml_branch_coverage=1 00:06:18.522 --rc genhtml_function_coverage=1 00:06:18.522 --rc genhtml_legend=1 00:06:18.522 --rc geninfo_all_blocks=1 00:06:18.522 --rc geninfo_unexecuted_blocks=1 00:06:18.522 00:06:18.522 ' 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.522 --rc genhtml_branch_coverage=1 00:06:18.522 --rc genhtml_function_coverage=1 00:06:18.522 --rc genhtml_legend=1 00:06:18.522 --rc geninfo_all_blocks=1 00:06:18.522 --rc geninfo_unexecuted_blocks=1 00:06:18.522 00:06:18.522 ' 00:06:18.522 16:33:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:18.522 16:33:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1969170 00:06:18.522 16:33:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.522 16:33:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1969170 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1969170 ']' 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.522 16:33:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.522 16:33:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.522 [2024-12-06 16:33:07.000556] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:18.522 [2024-12-06 16:33:07.000617] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969170 ] 00:06:18.522 [2024-12-06 16:33:07.083793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:18.522 [2024-12-06 16:33:07.116050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.522 [2024-12-06 16:33:07.116217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.522 [2024-12-06 16:33:07.116559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.522 [2024-12-06 16:33:07.116562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:19.461 16:33:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 [2024-12-06 16:33:07.794945] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:19.461 [2024-12-06 16:33:07.794961] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:19.461 [2024-12-06 16:33:07.794968] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:19.461 [2024-12-06 16:33:07.794972] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:19.461 [2024-12-06 16:33:07.794976] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 [2024-12-06 16:33:07.846433] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
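The startup just traced is the standard --wait-for-rpc pattern: the app pauses before subsystem init so a scheduler can be selected over RPC first; the dpdk_governor ERROR about SMT siblings is non-fatal, and the dynamic scheduler proceeds without that governor. Condensed into the underlying commands (a sketch, assuming rpc.py from the same checkout and the default /var/tmp/spdk.sock socket):
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo $SPDK_DIR/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
# once the app is listening on /var/tmp/spdk.sock:
sudo $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic   # must precede init
sudo $SPDK_DIR/scripts/rpc.py framework_start_init              # completes startup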
00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 ************************************ 00:06:19.461 START TEST scheduler_create_thread 00:06:19.461 ************************************ 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 2 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 3 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 4 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 5 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 6 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 7 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 8 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 9 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 10 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.461 16:33:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.031 16:33:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.031 00:06:20.031 real 0m0.592s 00:06:20.031 user 0m0.014s 00:06:20.031 sys 0m0.001s 00:06:20.031 16:33:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.031 16:33:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.031 ************************************ 00:06:20.031 END TEST scheduler_create_thread 00:06:20.031 ************************************ 00:06:20.031 16:33:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:20.031 16:33:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1969170 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1969170 ']' 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1969170 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969170 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969170' 00:06:20.031 killing process with pid 1969170 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1969170 00:06:20.031 16:33:08 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1969170 00:06:20.291 [2024-12-06 16:33:08.950061] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
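scheduler_create_thread, which just passed above, drives a test-only rpc.py plugin (rpc_cmd adds --plugin scheduler_plugin; the sketch below assumes PYTHONPATH already contains test/event/scheduler so the plugin resolves). The thread lifecycle it exercises, condensed:
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="sudo $SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"
$rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100%-busy thread pinned to core 0
id=$($rpc scheduler_thread_create -n half_active -a 0)        # prints the new thread id (11 above)
$rpc scheduler_thread_set_active "$id" 50                     # raise its active load to 50%
id=$($rpc scheduler_thread_create -n deleted -a 100)          # thread 12 above...
$rpc scheduler_thread_delete "$id"                            # ...created only to be deleted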
00:06:20.551 00:06:20.551 real 0m2.193s 00:06:20.551 user 0m4.478s 00:06:20.551 sys 0m0.313s 00:06:20.551 16:33:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.551 16:33:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 ************************************ 00:06:20.551 END TEST event_scheduler 00:06:20.551 ************************************ 00:06:20.551 16:33:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:20.551 16:33:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:20.551 16:33:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.551 16:33:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.551 16:33:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 ************************************ 00:06:20.551 START TEST app_repeat 00:06:20.551 ************************************ 00:06:20.551 16:33:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1969807 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1969807' 00:06:20.551 Process app_repeat pid: 1969807 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:20.551 spdk_app_start Round 0 00:06:20.551 16:33:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1969807 /var/tmp/spdk-nbd.sock 00:06:20.551 16:33:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969807 ']' 00:06:20.551 16:33:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.551 16:33:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.551 16:33:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.551 16:33:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.551 16:33:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 [2024-12-06 16:33:09.108053] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
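app_repeat, now starting, loops spdk_app_start over three rounds (the "for i in {0..2}" above); each round boots the target on the /var/tmp/spdk-nbd.sock socket, creates two RAM-backed bdevs, verifies them through the kernel NBD driver, and tears everything down. The per-round bdev setup reduces to the following sketch (socket path per the -r flag above):
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="sudo $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096   # 64 MiB malloc bdev, 4 KiB blocks; auto-named Malloc0
$rpc bdev_malloc_create 64 4096   # second call is auto-named Malloc1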
00:06:20.551 [2024-12-06 16:33:09.108098] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969807 ] 00:06:20.551 [2024-12-06 16:33:09.172237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.551 [2024-12-06 16:33:09.190567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.551 [2024-12-06 16:33:09.190570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.811 16:33:09 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.811 16:33:09 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:20.811 16:33:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.811 Malloc0 00:06:20.811 16:33:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.071 Malloc1 00:06:21.071 16:33:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.071 16:33:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:21.330 /dev/nbd0 00:06:21.330 16:33:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.330 16:33:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.330 1+0 records in 00:06:21.330 1+0 records out 00:06:21.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199145 s, 20.6 MB/s 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.330 16:33:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.330 16:33:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.330 16:33:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.330 /dev/nbd1 00:06:21.330 16:33:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.330 16:33:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.330 1+0 records in 00:06:21.330 1+0 records out 00:06:21.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212963 s, 19.2 MB/s 00:06:21.330 16:33:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.330 16:33:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:21.330 16:33:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.330 16:33:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.330 16:33:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:21.330 16:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.330 16:33:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.330 
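The waitfornbd helper traced above polls /proc/partitions (up to 20 tries) until the kernel has registered the device, then reads one 4 KiB block through O_DIRECT and checks that something landed. A condensed equivalent, with the bounded retry simplified to an until-loop and /tmp standing in for the harness's nbdtest path:
until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done    # harness caps this at 20 tries
sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" -ne 0 ] && rm -f /tmp/nbdtest  # only a non-empty read is required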
16:33:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.330 16:33:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.330 16:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:21.590 { 00:06:21.590 "nbd_device": "/dev/nbd0", 00:06:21.590 "bdev_name": "Malloc0" 00:06:21.590 }, 00:06:21.590 { 00:06:21.590 "nbd_device": "/dev/nbd1", 00:06:21.590 "bdev_name": "Malloc1" 00:06:21.590 } 00:06:21.590 ]' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.590 { 00:06:21.590 "nbd_device": "/dev/nbd0", 00:06:21.590 "bdev_name": "Malloc0" 00:06:21.590 }, 00:06:21.590 { 00:06:21.590 "nbd_device": "/dev/nbd1", 00:06:21.590 "bdev_name": "Malloc1" 00:06:21.590 } 00:06:21.590 ]' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.590 /dev/nbd1' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.590 /dev/nbd1' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.590 256+0 records in 00:06:21.590 256+0 records out 00:06:21.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00447865 s, 234 MB/s 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.590 256+0 records in 00:06:21.590 256+0 records out 00:06:21.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149084 s, 70.3 MB/s 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.590 256+0 records in 00:06:21.590 256+0 records out 00:06:21.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129391 s, 81.0 MB/s 00:06:21.590 16:33:10 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.590 16:33:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.849 16:33:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.107 16:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.366 16:33:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.366 16:33:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.366 16:33:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.624 [2024-12-06 16:33:11.063038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.624 [2024-12-06 16:33:11.079206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.624 [2024-12-06 16:33:11.079346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.624 [2024-12-06 16:33:11.108438] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.624 [2024-12-06 16:33:11.108472] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.909 16:33:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:25.909 16:33:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:25.909 spdk_app_start Round 1 00:06:25.909 16:33:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1969807 /var/tmp/spdk-nbd.sock 00:06:25.909 16:33:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969807 ']' 00:06:25.909 16:33:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.909 16:33:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.909 16:33:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
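Round 0's teardown, just traced: 1 MiB of random data was written through each NBD device and compared on readback, both devices were detached, the disk list was confirmed empty, and the app was stopped with SIGTERM followed by a 3-second settle; Round 1, starting above, repeats the cycle. Condensed (a sketch, with paths and the rpc shorthand as in the earlier sketches; nbdrandtest is the harness's random-data file):
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="sudo $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
sudo cmp -b -n 1M $SPDK_DIR/test/event/nbdrandtest /dev/nbd0  # readback must match the pattern
sudo cmp -b -n 1M $SPDK_DIR/test/event/nbdrandtest /dev/nbd1
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
[ "$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)" -eq 0 ] || exit 1
$rpc spdk_kill_instance SIGTERM
sleep 3   # let the app exit cleanly before the next round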
00:06:25.909 16:33:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.909 16:33:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.909 16:33:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.909 16:33:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:25.909 16:33:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.909 Malloc0 00:06:25.909 16:33:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.909 Malloc1 00:06:25.909 16:33:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.909 16:33:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.168 /dev/nbd0 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:26.168 1+0 records in 00:06:26.168 1+0 records out 00:06:26.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205089 s, 20.0 MB/s 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.168 /dev/nbd1 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.168 1+0 records in 00:06:26.168 1+0 records out 00:06:26.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 8.9025e-05 s, 46.0 MB/s 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:26.168 16:33:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.168 16:33:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.427 16:33:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:26.427 { 00:06:26.427 "nbd_device": "/dev/nbd0", 00:06:26.427 "bdev_name": "Malloc0" 00:06:26.427 }, 00:06:26.427 { 00:06:26.427 "nbd_device": "/dev/nbd1", 00:06:26.427 "bdev_name": "Malloc1" 00:06:26.427 } 00:06:26.427 ]' 00:06:26.427 16:33:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.427 { 00:06:26.427 "nbd_device": "/dev/nbd0", 00:06:26.427 "bdev_name": "Malloc0" 00:06:26.427 }, 00:06:26.427 { 00:06:26.427 "nbd_device": "/dev/nbd1", 00:06:26.427 "bdev_name": "Malloc1" 00:06:26.427 } 00:06:26.427 ]' 00:06:26.427 16:33:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.427 /dev/nbd1' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.427 /dev/nbd1' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.427 256+0 records in 00:06:26.427 256+0 records out 00:06:26.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431043 s, 243 MB/s 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.427 256+0 records in 00:06:26.427 256+0 records out 00:06:26.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117658 s, 89.1 MB/s 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.427 256+0 records in 00:06:26.427 256+0 records out 00:06:26.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123002 s, 85.2 MB/s 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.427 16:33:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.685 16:33:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.944 16:33:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.944 16:33:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.201 16:33:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.201 [2024-12-06 16:33:15.875869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.201 [2024-12-06 16:33:15.891745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.201 [2024-12-06 16:33:15.891748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.460 [2024-12-06 16:33:15.921424] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.460 [2024-12-06 16:33:15.921457] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.746 16:33:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.746 16:33:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:30.746 spdk_app_start Round 2 00:06:30.746 16:33:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1969807 /var/tmp/spdk-nbd.sock 00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969807 ']' 00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
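Each teardown above ends by confirming the target exports zero nbd devices before the instance is killed. A minimal sketch of the nbd_get_count helper, assuming its shape from the nbd_common.sh@61-@66 trace lines (rpc_py is this run's scripts/rpc.py):

    # Count the nbd devices the target still exports. grep -c prints 0
    # but exits nonzero when nothing matches, so '|| true' keeps the
    # empty case from tripping 'set -e' (the '# true' line in the trace).
    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }

After both disks are stopped the JSON is '[]', the pipeline yields 0, and the '[' 0 -ne 0 ']' guard at nbd_common.sh@105 passes.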
00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.746 16:33:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:30.746 16:33:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.746 Malloc0 00:06:30.746 16:33:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.746 Malloc1 00:06:30.746 16:33:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.746 16:33:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.005 /dev/nbd0 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:31.005 1+0 records in 00:06:31.005 1+0 records out 00:06:31.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284148 s, 14.4 MB/s 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.005 /dev/nbd1 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.005 1+0 records in 00:06:31.005 1+0 records out 00:06:31.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000109774 s, 37.3 MB/s 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:31.005 16:33:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.005 16:33:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:31.264 { 00:06:31.264 "nbd_device": "/dev/nbd0", 00:06:31.264 "bdev_name": "Malloc0" 00:06:31.264 }, 00:06:31.264 { 00:06:31.264 "nbd_device": "/dev/nbd1", 00:06:31.264 "bdev_name": "Malloc1" 00:06:31.264 } 00:06:31.264 ]' 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.264 { 00:06:31.264 "nbd_device": "/dev/nbd0", 00:06:31.264 "bdev_name": "Malloc0" 00:06:31.264 }, 00:06:31.264 { 00:06:31.264 "nbd_device": "/dev/nbd1", 00:06:31.264 "bdev_name": "Malloc1" 00:06:31.264 } 00:06:31.264 ]' 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.264 /dev/nbd1' 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.264 /dev/nbd1' 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.264 16:33:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.264 256+0 records in 00:06:31.264 256+0 records out 00:06:31.264 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461415 s, 227 MB/s 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.265 256+0 records in 00:06:31.265 256+0 records out 00:06:31.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118051 s, 88.8 MB/s 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.265 256+0 records in 00:06:31.265 256+0 records out 00:06:31.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122062 s, 85.9 MB/s 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.265 16:33:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.524 16:33:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.783 16:33:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:31.784 16:33:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:31.784 16:33:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.043 16:33:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.043 [2024-12-06 16:33:20.726220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.302 [2024-12-06 16:33:20.742381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.302 [2024-12-06 16:33:20.742483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.302 [2024-12-06 16:33:20.772141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.302 [2024-12-06 16:33:20.772173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:35.592 16:33:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1969807 /var/tmp/spdk-nbd.sock 00:06:35.592 16:33:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1969807 ']' 00:06:35.592 16:33:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.592 16:33:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
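The nbd_stop_disks pass just traced waits for each /dev/nbdX node to actually disappear before moving on. A sketch of waitfornbd_exit consistent with the nbd_common.sh@35-@45 lines; the retry delay is an assumed value, since every device in this run vanishes on the first probe:

    # Poll /proc/partitions until the nbd device is gone, up to 20 tries.
    # The 0.1 s sleep is an assumption; the trace never needs a retry.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }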
00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:35.593 16:33:23 event.app_repeat -- event/event.sh@39 -- # killprocess 1969807 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1969807 ']' 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1969807 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1969807 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1969807' 00:06:35.593 killing process with pid 1969807 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1969807 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1969807 00:06:35.593 spdk_app_start is called in Round 0. 00:06:35.593 Shutdown signal received, stop current app iteration 00:06:35.593 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 reinitialization... 00:06:35.593 spdk_app_start is called in Round 1. 00:06:35.593 Shutdown signal received, stop current app iteration 00:06:35.593 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 reinitialization... 00:06:35.593 spdk_app_start is called in Round 2. 00:06:35.593 Shutdown signal received, stop current app iteration 00:06:35.593 Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 reinitialization... 00:06:35.593 spdk_app_start is called in Round 3. 
00:06:35.593 Shutdown signal received, stop current app iteration 00:06:35.593 16:33:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.593 16:33:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.593 00:06:35.593 real 0m14.835s 00:06:35.593 user 0m32.366s 00:06:35.593 sys 0m1.882s 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.593 16:33:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.593 ************************************ 00:06:35.593 END TEST app_repeat 00:06:35.593 ************************************ 00:06:35.593 16:33:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.593 16:33:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:35.593 16:33:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.593 16:33:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.593 16:33:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.593 ************************************ 00:06:35.593 START TEST cpu_locks 00:06:35.593 ************************************ 00:06:35.593 16:33:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:35.593 * Looking for test storage... 00:06:35.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.593 16:33:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:35.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.593 --rc genhtml_branch_coverage=1 00:06:35.593 --rc genhtml_function_coverage=1 00:06:35.593 --rc genhtml_legend=1 00:06:35.593 --rc geninfo_all_blocks=1 00:06:35.593 --rc geninfo_unexecuted_blocks=1 00:06:35.593 00:06:35.593 ' 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:35.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.593 --rc genhtml_branch_coverage=1 00:06:35.593 --rc genhtml_function_coverage=1 00:06:35.593 --rc genhtml_legend=1 00:06:35.593 --rc geninfo_all_blocks=1 00:06:35.593 --rc geninfo_unexecuted_blocks=1 00:06:35.593 00:06:35.593 ' 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:35.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.593 --rc genhtml_branch_coverage=1 00:06:35.593 --rc genhtml_function_coverage=1 00:06:35.593 --rc genhtml_legend=1 00:06:35.593 --rc geninfo_all_blocks=1 00:06:35.593 --rc geninfo_unexecuted_blocks=1 00:06:35.593 00:06:35.593 ' 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:35.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.593 --rc genhtml_branch_coverage=1 00:06:35.593 --rc genhtml_function_coverage=1 00:06:35.593 --rc genhtml_legend=1 00:06:35.593 --rc geninfo_all_blocks=1 00:06:35.593 --rc geninfo_unexecuted_blocks=1 00:06:35.593 00:06:35.593 ' 00:06:35.593 16:33:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:35.593 16:33:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:35.593 16:33:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:35.593 16:33:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.593 16:33:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.593 ************************************ 
00:06:35.593 START TEST default_locks 00:06:35.593 ************************************ 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1973591 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1973591 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1973591 ']' 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.593 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.593 [2024-12-06 16:33:24.148606] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:35.593 [2024-12-06 16:33:24.148654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973591 ] 00:06:35.593 [2024-12-06 16:33:24.211668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.593 [2024-12-06 16:33:24.228353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1973591 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1973591 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.852 lslocks: write error 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1973591 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1973591 ']' 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1973591 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.852 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973591 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1973591' 00:06:36.111 killing process with pid 1973591 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1973591 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1973591 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1973591 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1973591 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1973591 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1973591 ']' 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
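The 'lslocks: write error' printed earlier in this test is benign: grep -q exits on its first match and closes the pipe while lslocks is still writing. The check itself is small; a sketch matching the cpu_locks.sh@22 trace lines:

    # A reactor pins its core by holding a file lock whose path contains
    # 'spdk_cpu_lock', so lslocks on the target pid reveals it.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }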
00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.111 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1973591) - No such process 00:06:36.112 ERROR: process (pid: 1973591) is no longer running 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.112 00:06:36.112 real 0m0.651s 00:06:36.112 user 0m0.625s 00:06:36.112 sys 0m0.341s 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.112 16:33:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.112 ************************************ 00:06:36.112 END TEST default_locks 00:06:36.112 ************************************ 00:06:36.112 16:33:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:36.112 16:33:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.112 16:33:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.112 16:33:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.370 ************************************ 00:06:36.370 START TEST default_locks_via_rpc 00:06:36.370 ************************************ 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1973807 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1973807 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1973807 ']' 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.370 16:33:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.370 [2024-12-06 16:33:24.846777] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:36.370 [2024-12-06 16:33:24.846826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973807 ] 00:06:36.370 [2024-12-06 16:33:24.911167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.370 [2024-12-06 16:33:24.928736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1973807 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1973807 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1973807 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1973807 ']' 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1973807 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1973807 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973807' 00:06:36.643 killing process with pid 1973807 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1973807 00:06:36.643 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1973807 00:06:36.901 00:06:36.901 real 0m0.642s 00:06:36.901 user 0m0.623s 00:06:36.901 sys 0m0.334s 00:06:36.901 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.901 16:33:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.901 ************************************ 00:06:36.901 END TEST default_locks_via_rpc 00:06:36.901 ************************************ 00:06:36.901 16:33:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:36.901 16:33:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.901 16:33:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.901 16:33:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.901 ************************************ 00:06:36.901 START TEST non_locking_app_on_locked_coremask 00:06:36.901 ************************************ 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1973984 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1973984 /var/tmp/spdk.sock 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1973984 ']' 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.901 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.901 [2024-12-06 16:33:25.521938] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
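killprocess, traced twice in the tests above, guards the SIGTERM with liveness and identity checks. A condensed sketch assuming Linux, the only branch this run takes; the sudo re-targeting branch is stubbed out because the trace never enters it:

    # Verify the pid is set and alive, confirm the process name, then
    # terminate and reap it.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                   # still running?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :                                            # sudo wrapper handling, not exercised here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; tolerate nonzero exit
    }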
00:06:36.901 [2024-12-06 16:33:25.521971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973984 ] 00:06:36.901 [2024-12-06 16:33:25.578121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.159 [2024-12-06 16:33:25.594489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1973993 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1973993 /var/tmp/spdk2.sock 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1973993 ']' 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.159 16:33:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:37.159 [2024-12-06 16:33:25.775944] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:37.159 [2024-12-06 16:33:25.775997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1973993 ] 00:06:37.416 [2024-12-06 16:33:25.869491] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
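For reference, the killprocess helper whose xtrace fills the teardown above (autotest_common.sh@954-978) reduces to roughly the following. This is a sketch reconstructed from the trace, not the in-tree source, which carries extra handling omitted here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # @954: refuse an empty pid
        kill -0 "$pid" || return 1         # @958: fail fast if the process is already gone
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            # @960: resolve the command name so a recycled pid is not signalled by mistake
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then   # @964: never signal a sudo wrapper directly
            echo "killing process with pid $pid"
            kill "$pid"                        # @973
            wait "$pid" || true                # @978: reap it, tolerating a nonzero exit
        fi
    }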
00:06:37.416 [2024-12-06 16:33:25.869512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.416 [2024-12-06 16:33:25.902063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.981 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.981 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:37.981 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1973984 00:06:37.981 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.981 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1973984 00:06:38.240 lslocks: write error 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1973984 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1973984 ']' 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1973984 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973984 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973984' 00:06:38.240 killing process with pid 1973984 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1973984 00:06:38.240 16:33:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1973984 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1973993 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1973993 ']' 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1973993 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973993 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973993' 00:06:38.825 
killing process with pid 1973993 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1973993 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1973993 00:06:38.825 00:06:38.825 real 0m1.951s 00:06:38.825 user 0m2.119s 00:06:38.825 sys 0m0.672s 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.825 16:33:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.825 ************************************ 00:06:38.825 END TEST non_locking_app_on_locked_coremask 00:06:38.825 ************************************ 00:06:38.825 16:33:27 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:38.825 16:33:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.825 16:33:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.825 16:33:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.825 ************************************ 00:06:38.825 START TEST locking_app_on_unlocked_coremask 00:06:38.825 ************************************ 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1974362 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1974362 /var/tmp/spdk.sock 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1974362 ']' 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.825 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.084 [2024-12-06 16:33:27.528036] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:39.084 [2024-12-06 16:33:27.528084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974362 ] 00:06:39.084 [2024-12-06 16:33:27.592251] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
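The locks_exist check traced at cpu_locks.sh@22 is a one-line pipeline; the "lslocks: write error" messages scattered through this log are most likely lslocks hitting a closed pipe (EPIPE) after grep -q matches and exits early, not a failure of the check itself. A sketch of the assumed shape:

    locks_exist() {
        local pid=$1
        # grep -q exits on the first match, so lslocks may report "write error";
        # the pipeline's exit status still reflects whether a core lock file is held.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }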
00:06:39.084 [2024-12-06 16:33:27.592272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.084 [2024-12-06 16:33:27.608755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1974486 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1974486 /var/tmp/spdk2.sock 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1974486 ']' 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.084 16:33:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.343 [2024-12-06 16:33:27.798714] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
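waitforlisten (autotest_common.sh@835 onward) is what prints the "Waiting for process to start up and listen on UNIX domain socket..." banners seen throughout these tests. A minimal reduction under stated assumptions: the bare socket test below stands in for the helper's real RPC-level readiness probe, which the xtrace does not show:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}   # @839: default RPC socket
        local max_retries=100                     # @840
        [ -n "$pid" ] || return 1                 # @835
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = max_retries; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
            [ -S "$rpc_addr" ] && return 0            # assumption: socket presence = ready
            sleep 0.1
        done
        return 1
    }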
00:06:39.343 [2024-12-06 16:33:27.798769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974486 ] 00:06:39.343 [2024-12-06 16:33:27.892756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.343 [2024-12-06 16:33:27.925461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.911 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.911 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.911 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1974486 00:06:39.911 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.911 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1974486 00:06:40.170 lslocks: write error 00:06:40.170 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1974362 00:06:40.170 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1974362 ']' 00:06:40.170 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1974362 00:06:40.170 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.170 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.170 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974362 00:06:40.429 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.429 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.429 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974362' 00:06:40.429 killing process with pid 1974362 00:06:40.429 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1974362 00:06:40.429 16:33:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1974362 00:06:40.687 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1974486 00:06:40.687 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1974486 ']' 00:06:40.687 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1974486 00:06:40.687 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.688 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.688 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1974486 00:06:40.688 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.688 16:33:29 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.688 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1974486' 00:06:40.688 killing process with pid 1974486 00:06:40.688 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1974486 00:06:40.688 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1974486 00:06:40.946 00:06:40.946 real 0m1.959s 00:06:40.946 user 0m2.118s 00:06:40.946 sys 0m0.674s 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.946 ************************************ 00:06:40.946 END TEST locking_app_on_unlocked_coremask 00:06:40.946 ************************************ 00:06:40.946 16:33:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.946 16:33:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.946 16:33:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.946 16:33:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.946 ************************************ 00:06:40.946 START TEST locking_app_on_locked_coremask 00:06:40.946 ************************************ 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1975005 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1975005 /var/tmp/spdk.sock 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975005 ']' 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.946 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.946 [2024-12-06 16:33:29.537747] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:06:40.946 [2024-12-06 16:33:29.537799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975005 ] 00:06:40.946 [2024-12-06 16:33:29.604190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.946 [2024-12-06 16:33:29.620736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1975068 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1975068 /var/tmp/spdk2.sock 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1975068 /var/tmp/spdk2.sock 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1975068 /var/tmp/spdk2.sock 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975068 ']' 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.205 16:33:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.205 [2024-12-06 16:33:29.807189] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
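The NOT wrapper around this waitforlisten call inverts the wrapped command's outcome so that an expected failure passes the test. A sketch of that shape, with the signal handling inferred from the es > 128 test in the trace that follows (the in-tree helper also consults an allowed-error list, visible below as the empty [[ -n '' ]] check, which this omits):

    NOT() {
        local es=0
        "$@" || es=$?                # run the wrapped command, capturing its exit status
        (( es > 128 )) && return 1   # assumption: death by signal is never an expected failure
        (( es != 0 ))                # succeed only if the command genuinely failed
    }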
00:06:41.205 [2024-12-06 16:33:29.807242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975068 ] 00:06:41.464 [2024-12-06 16:33:29.905706] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1975005 has claimed it. 00:06:41.464 [2024-12-06 16:33:29.905742] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1975068) - No such process 00:06:42.034 ERROR: process (pid: 1975068) is no longer running 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1975005 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1975005 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.034 lslocks: write error 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1975005 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1975005 ']' 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1975005 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975005 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975005' 00:06:42.034 killing process with pid 1975005 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1975005 00:06:42.034 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1975005 00:06:42.294 00:06:42.294 real 0m1.328s 00:06:42.294 user 0m1.448s 00:06:42.294 sys 0m0.419s 00:06:42.294 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
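What the NOT-wrapped waitforlisten just demonstrated can be reproduced by hand: a second target asking for an already-claimed core without --disable-cpumask-locks is refused at startup. Paths are relative to the spdk checkout and the pid in the message varies:

    # first target claims core 0 and creates /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 &
    # second target on the same mask, on its own RPC socket, exits immediately:
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    #   app.c: *ERROR*: Cannot create lock on core 0, probably process <pid> has claimed it.
    #   app.c: *ERROR*: Unable to acquire lock on assigned core mask - exiting.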
00:06:42.294 16:33:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.294 ************************************ 00:06:42.294 END TEST locking_app_on_locked_coremask 00:06:42.294 ************************************ 00:06:42.294 16:33:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.294 16:33:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.294 16:33:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.294 16:33:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.294 ************************************ 00:06:42.294 START TEST locking_overlapped_coremask 00:06:42.294 ************************************ 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1975284 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1975284 /var/tmp/spdk.sock 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975284 ']' 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.294 16:33:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.294 [2024-12-06 16:33:30.910654] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:06:42.294 [2024-12-06 16:33:30.910702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975284 ] 00:06:42.294 [2024-12-06 16:33:30.973570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.553 [2024-12-06 16:33:30.992063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.553 [2024-12-06 16:33:30.992217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.553 [2024-12-06 16:33:30.992305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1975432 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1975432 /var/tmp/spdk2.sock 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1975432 /var/tmp/spdk2.sock 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1975432 /var/tmp/spdk2.sock 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1975432 ']' 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.553 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.553 [2024-12-06 16:33:31.184889] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:06:42.553 [2024-12-06 16:33:31.184944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975432 ] 00:06:42.812 [2024-12-06 16:33:31.307871] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1975284 has claimed it. 00:06:42.812 [2024-12-06 16:33:31.307912] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:43.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1975432) - No such process 00:06:43.379 ERROR: process (pid: 1975432) is no longer running 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1975284 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1975284 ']' 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1975284 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975284 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975284' 00:06:43.379 killing process with pid 1975284 00:06:43.379 16:33:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1975284 00:06:43.379 16:33:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1975284 00:06:43.379 00:06:43.379 real 0m1.162s 00:06:43.379 user 0m3.268s 00:06:43.379 sys 0m0.301s 00:06:43.379 16:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.379 16:33:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.379 ************************************ 00:06:43.379 END TEST locking_overlapped_coremask 00:06:43.379 ************************************ 00:06:43.379 16:33:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:43.379 16:33:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.379 16:33:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.379 16:33:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.640 ************************************ 00:06:43.640 START TEST locking_overlapped_coremask_via_rpc 00:06:43.640 ************************************ 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1975478 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1975478 /var/tmp/spdk.sock 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975478 ']' 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.640 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.640 [2024-12-06 16:33:32.117148] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:43.640 [2024-12-06 16:33:32.117196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975478 ] 00:06:43.640 [2024-12-06 16:33:32.182205] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
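check_remaining_locks, traced at cpu_locks.sh@36-38, asserts that exactly the lock files implied by the cpumask survive; for -m 0x7 those are spdk_cpu_lock_000 through _002:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                     # @36: lock files actually present
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # @37: expected for cores 0-2
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]        # @38: the sets must match exactly
    }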
00:06:43.640 [2024-12-06 16:33:32.182228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.640 [2024-12-06 16:33:32.201464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.640 [2024-12-06 16:33:32.201614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.640 [2024-12-06 16:33:32.201615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1975642 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1975642 /var/tmp/spdk2.sock 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975642 ']' 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.900 16:33:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.900 [2024-12-06 16:33:32.393124] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:43.900 [2024-12-06 16:33:32.393180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975642 ] 00:06:43.900 [2024-12-06 16:33:32.490573] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.900 [2024-12-06 16:33:32.490597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.900 [2024-12-06 16:33:32.524731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.900 [2024-12-06 16:33:32.528222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.900 [2024-12-06 16:33:32.528224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.840 [2024-12-06 16:33:33.188161] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1975478 has claimed it. 
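The core numbers in these claim errors follow directly from the -m masks: each set bit is one claimed core, so the 0x7 target and the 0x1c target collide precisely on core 2:

    # -m 0x1  = 0b00001 -> core 0
    # -m 0x7  = 0b00111 -> cores 0,1,2  (first target, pid 1975478)
    # -m 0x1c = 0b11100 -> cores 2,3,4  (hence the reactors on cores 2-4 above,
    #                                    and the failure to claim core 2 below)
    for i in {0..7}; do (( 0x1c >> i & 1 )) && printf 'core %d\n' "$i"; done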
00:06:44.840 request: 00:06:44.840 { 00:06:44.840 "method": "framework_enable_cpumask_locks", 00:06:44.840 "req_id": 1 00:06:44.840 } 00:06:44.840 Got JSON-RPC error response 00:06:44.840 response: 00:06:44.840 { 00:06:44.840 "code": -32603, 00:06:44.840 "message": "Failed to claim CPU core: 2" 00:06:44.840 } 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1975478 /var/tmp/spdk.sock 00:06:44.840 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975478 ']' 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1975642 /var/tmp/spdk2.sock 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1975642 ']' 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
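The request/response pair above is the raw JSON-RPC exchange behind rpc_cmd framework_enable_cpumask_locks. The same failure can be provoked by hand; the rpc.py path is assumed from the standard spdk tree layout:

    # ask the --disable-cpumask-locks target on spdk2.sock to claim its cores now
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603 "Failed to claim CPU core: 2",
    #    since pid 1975478 (mask 0x7) already holds /var/tmp/spdk_cpu_lock_002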
00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.841 00:06:44.841 real 0m1.443s 00:06:44.841 user 0m0.652s 00:06:44.841 sys 0m0.101s 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.841 16:33:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.841 ************************************ 00:06:44.841 END TEST locking_overlapped_coremask_via_rpc 00:06:44.841 ************************************ 00:06:45.101 16:33:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:45.101 16:33:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1975478 ]] 00:06:45.101 16:33:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1975478 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975478 ']' 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975478 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975478 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975478' 00:06:45.101 killing process with pid 1975478 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1975478 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1975478 00:06:45.101 16:33:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1975642 ]] 00:06:45.101 16:33:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1975642 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975642 ']' 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975642 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:45.101 16:33:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1975642 00:06:45.360 16:33:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:45.360 16:33:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:45.360 16:33:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1975642' 00:06:45.360 killing process with pid 1975642 00:06:45.360 16:33:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1975642 00:06:45.360 16:33:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1975642 00:06:45.360 16:33:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.360 16:33:34 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.360 16:33:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1975478 ]] 00:06:45.360 16:33:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1975478 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975478 ']' 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975478 00:06:45.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1975478) - No such process 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1975478 is not found' 00:06:45.360 Process with pid 1975478 is not found 00:06:45.360 16:33:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1975642 ]] 00:06:45.360 16:33:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1975642 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1975642 ']' 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1975642 00:06:45.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1975642) - No such process 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1975642 is not found' 00:06:45.360 Process with pid 1975642 is not found 00:06:45.360 16:33:34 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.360 00:06:45.360 real 0m10.045s 00:06:45.360 user 0m18.692s 00:06:45.360 sys 0m3.555s 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.360 16:33:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.360 ************************************ 00:06:45.360 END TEST cpu_locks 00:06:45.360 ************************************ 00:06:45.360 00:06:45.360 real 0m30.824s 00:06:45.360 user 1m1.848s 00:06:45.360 sys 0m6.193s 00:06:45.360 16:33:34 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.360 16:33:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.360 ************************************ 00:06:45.360 END TEST event 00:06:45.360 ************************************ 00:06:45.618 16:33:34 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:45.618 16:33:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.618 16:33:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.618 16:33:34 -- common/autotest_common.sh@10 -- # set +x 00:06:45.618 ************************************ 00:06:45.618 START TEST thread 00:06:45.618 ************************************ 00:06:45.618 16:33:34 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:45.618 * Looking for test storage... 00:06:45.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:45.618 16:33:34 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:45.618 16:33:34 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:45.618 16:33:34 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:45.618 16:33:34 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:45.618 16:33:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.618 16:33:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.618 16:33:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.618 16:33:34 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.618 16:33:34 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.618 16:33:34 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.618 16:33:34 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.618 16:33:34 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.618 16:33:34 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.618 16:33:34 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.618 16:33:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.619 16:33:34 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:45.619 16:33:34 thread -- scripts/common.sh@345 -- # : 1 00:06:45.619 16:33:34 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.619 16:33:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.619 16:33:34 thread -- scripts/common.sh@365 -- # decimal 1 00:06:45.619 16:33:34 thread -- scripts/common.sh@353 -- # local d=1 00:06:45.619 16:33:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.619 16:33:34 thread -- scripts/common.sh@355 -- # echo 1 00:06:45.619 16:33:34 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.619 16:33:34 thread -- scripts/common.sh@366 -- # decimal 2 00:06:45.619 16:33:34 thread -- scripts/common.sh@353 -- # local d=2 00:06:45.619 16:33:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.619 16:33:34 thread -- scripts/common.sh@355 -- # echo 2 00:06:45.619 16:33:34 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.619 16:33:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.619 16:33:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.619 16:33:34 thread -- scripts/common.sh@368 -- # return 0 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.619 --rc genhtml_branch_coverage=1 00:06:45.619 --rc genhtml_function_coverage=1 00:06:45.619 --rc genhtml_legend=1 00:06:45.619 --rc geninfo_all_blocks=1 00:06:45.619 --rc geninfo_unexecuted_blocks=1 00:06:45.619 00:06:45.619 ' 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.619 --rc genhtml_branch_coverage=1 00:06:45.619 --rc genhtml_function_coverage=1 00:06:45.619 --rc genhtml_legend=1 00:06:45.619 --rc geninfo_all_blocks=1 00:06:45.619 --rc geninfo_unexecuted_blocks=1 00:06:45.619 
00:06:45.619 ' 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.619 --rc genhtml_branch_coverage=1 00:06:45.619 --rc genhtml_function_coverage=1 00:06:45.619 --rc genhtml_legend=1 00:06:45.619 --rc geninfo_all_blocks=1 00:06:45.619 --rc geninfo_unexecuted_blocks=1 00:06:45.619 00:06:45.619 ' 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:45.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.619 --rc genhtml_branch_coverage=1 00:06:45.619 --rc genhtml_function_coverage=1 00:06:45.619 --rc genhtml_legend=1 00:06:45.619 --rc geninfo_all_blocks=1 00:06:45.619 --rc geninfo_unexecuted_blocks=1 00:06:45.619 00:06:45.619 ' 00:06:45.619 16:33:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.619 16:33:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.619 ************************************ 00:06:45.619 START TEST thread_poller_perf 00:06:45.619 ************************************ 00:06:45.619 16:33:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.619 [2024-12-06 16:33:34.244798] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:45.619 [2024-12-06 16:33:34.244842] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976244 ] 00:06:45.619 [2024-12-06 16:33:34.303021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.879 [2024-12-06 16:33:34.320210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.879 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:46.818 [2024-12-06T15:33:35.511Z] ====================================== 00:06:46.818 [2024-12-06T15:33:35.511Z] busy:2408254436 (cyc) 00:06:46.818 [2024-12-06T15:33:35.511Z] total_run_count: 417000 00:06:46.818 [2024-12-06T15:33:35.511Z] tsc_hz: 2400000000 (cyc) 00:06:46.818 [2024-12-06T15:33:35.511Z] ====================================== 00:06:46.818 [2024-12-06T15:33:35.511Z] poller_cost: 5775 (cyc), 2406 (nsec) 00:06:46.818 00:06:46.818 real 0m1.111s 00:06:46.818 user 0m1.055s 00:06:46.818 sys 0m0.053s 00:06:46.818 16:33:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.818 16:33:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.818 ************************************ 00:06:46.818 END TEST thread_poller_perf 00:06:46.818 ************************************ 00:06:46.818 16:33:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.818 16:33:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:46.818 16:33:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.818 16:33:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.818 ************************************ 00:06:46.818 START TEST thread_poller_perf 00:06:46.818 ************************************ 00:06:46.818 16:33:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.818 [2024-12-06 16:33:35.397344] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:06:46.818 [2024-12-06 16:33:35.397378] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976378 ] 00:06:46.818 [2024-12-06 16:33:35.451350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.818 [2024-12-06 16:33:35.466976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.818 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:48.197 [2024-12-06T15:33:36.890Z] ====================================== 00:06:48.197 [2024-12-06T15:33:36.890Z] busy:2401347884 (cyc) 00:06:48.197 [2024-12-06T15:33:36.890Z] total_run_count: 5110000 00:06:48.197 [2024-12-06T15:33:36.890Z] tsc_hz: 2400000000 (cyc) 00:06:48.197 [2024-12-06T15:33:36.890Z] ====================================== 00:06:48.197 [2024-12-06T15:33:36.890Z] poller_cost: 469 (cyc), 195 (nsec) 00:06:48.197 00:06:48.197 real 0m1.098s 00:06:48.197 user 0m1.053s 00:06:48.197 sys 0m0.042s 00:06:48.197 16:33:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.197 16:33:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.197 ************************************ 00:06:48.197 END TEST thread_poller_perf 00:06:48.197 ************************************ 00:06:48.197 16:33:36 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.197 00:06:48.197 real 0m2.428s 00:06:48.197 user 0m2.220s 00:06:48.197 sys 0m0.213s 00:06:48.197 16:33:36 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.197 16:33:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.197 ************************************ 00:06:48.197 END TEST thread 00:06:48.197 ************************************ 00:06:48.197 16:33:36 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:48.197 16:33:36 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.197 16:33:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.197 16:33:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.197 16:33:36 -- common/autotest_common.sh@10 -- # set +x 00:06:48.197 ************************************ 00:06:48.197 START TEST app_cmdline 00:06:48.197 ************************************ 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:48.197 * Looking for test storage... 
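The counters printed by the two poller_perf runs above are enough to recompute the reported poller_cost by hand: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure follows from tsc_hz. A minimal sketch in plain bash arithmetic, using the first run's values (an illustration only, not harness output):

  busy=2408254436 runs=417000 tsc_hz=2400000000
  cyc=$(( busy / runs ))                   # 5775 cyc, matching the report above
  nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2406 nsec, matching the report above
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The same arithmetic on the second, zero-period run (2401347884 busy cycles over 5110000 polls) gives 469 cyc and 195 nsec, again matching the printed poller_cost.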
00:06:48.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.197 16:33:36 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.197 --rc genhtml_branch_coverage=1 00:06:48.197 --rc genhtml_function_coverage=1 00:06:48.197 --rc genhtml_legend=1 00:06:48.197 --rc geninfo_all_blocks=1 00:06:48.197 --rc geninfo_unexecuted_blocks=1 00:06:48.197 00:06:48.197 ' 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.197 --rc genhtml_branch_coverage=1 00:06:48.197 --rc genhtml_function_coverage=1 00:06:48.197 --rc genhtml_legend=1 00:06:48.197 --rc geninfo_all_blocks=1 00:06:48.197 --rc geninfo_unexecuted_blocks=1 
00:06:48.197 00:06:48.197 ' 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.197 --rc genhtml_branch_coverage=1 00:06:48.197 --rc genhtml_function_coverage=1 00:06:48.197 --rc genhtml_legend=1 00:06:48.197 --rc geninfo_all_blocks=1 00:06:48.197 --rc geninfo_unexecuted_blocks=1 00:06:48.197 00:06:48.197 ' 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.197 --rc genhtml_branch_coverage=1 00:06:48.197 --rc genhtml_function_coverage=1 00:06:48.197 --rc genhtml_legend=1 00:06:48.197 --rc geninfo_all_blocks=1 00:06:48.197 --rc geninfo_unexecuted_blocks=1 00:06:48.197 00:06:48.197 ' 00:06:48.197 16:33:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.197 16:33:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1976677 00:06:48.197 16:33:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1976677 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1976677 ']' 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.197 16:33:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.197 16:33:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.197 [2024-12-06 16:33:36.713782] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:06:48.197 [2024-12-06 16:33:36.713833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976677 ] 00:06:48.197 [2024-12-06 16:33:36.771151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.197 [2024-12-06 16:33:36.788051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.457 16:33:36 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.457 16:33:36 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:48.457 16:33:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:48.457 { 00:06:48.457 "version": "SPDK v25.01-pre git sha1 a5e6ecf28", 00:06:48.457 "fields": { 00:06:48.457 "major": 25, 00:06:48.457 "minor": 1, 00:06:48.457 "patch": 0, 00:06:48.457 "suffix": "-pre", 00:06:48.457 "commit": "a5e6ecf28" 00:06:48.457 } 00:06:48.457 } 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:48.457 16:33:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:48.457 16:33:37 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:48.717 request: 00:06:48.717 { 00:06:48.717 "method": "env_dpdk_get_mem_stats", 00:06:48.717 "req_id": 1 00:06:48.717 } 00:06:48.717 Got JSON-RPC error response 00:06:48.717 response: 00:06:48.717 { 00:06:48.717 "code": -32601, 00:06:48.717 "message": "Method not found" 00:06:48.717 } 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.717 16:33:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1976677 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1976677 ']' 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1976677 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1976677 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1976677' 00:06:48.717 killing process with pid 1976677 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@973 -- # kill 1976677 00:06:48.717 16:33:37 app_cmdline -- common/autotest_common.sh@978 -- # wait 1976677 00:06:48.977 00:06:48.977 real 0m0.941s 00:06:48.977 user 0m1.146s 00:06:48.977 sys 0m0.312s 00:06:48.977 16:33:37 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.977 16:33:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.977 ************************************ 00:06:48.977 END TEST app_cmdline 00:06:48.977 ************************************ 00:06:48.977 16:33:37 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:48.977 16:33:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.977 16:33:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.977 16:33:37 -- common/autotest_common.sh@10 -- # set +x 00:06:48.977 ************************************ 00:06:48.977 START TEST version 00:06:48.977 ************************************ 00:06:48.977 16:33:37 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:48.977 * Looking for test storage... 
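The app_cmdline test above is exercising RPC filtering: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while everything else, here env_dpdk_get_mem_stats, is rejected with JSON-RPC error -32601 (Method not found). A minimal sketch of the same three calls issued by hand against such a target (assuming the default /var/tmp/spdk.sock socket):

  scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
  scripts/rpc.py rpc_get_methods           # allowed: lists exactly the permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats    # filtered out: fails with 'Method not found' (-32601)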
00:06:48.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:48.977 16:33:37 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.977 16:33:37 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.977 16:33:37 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.977 16:33:37 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.977 16:33:37 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.977 16:33:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.977 16:33:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.977 16:33:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.977 16:33:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.977 16:33:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.977 16:33:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.977 16:33:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.977 16:33:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.977 16:33:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.977 16:33:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.977 16:33:37 version -- scripts/common.sh@344 -- # case "$op" in 00:06:48.977 16:33:37 version -- scripts/common.sh@345 -- # : 1 00:06:48.977 16:33:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.977 16:33:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.977 16:33:37 version -- scripts/common.sh@365 -- # decimal 1 00:06:49.237 16:33:37 version -- scripts/common.sh@353 -- # local d=1 00:06:49.237 16:33:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.237 16:33:37 version -- scripts/common.sh@355 -- # echo 1 00:06:49.237 16:33:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.237 16:33:37 version -- scripts/common.sh@366 -- # decimal 2 00:06:49.237 16:33:37 version -- scripts/common.sh@353 -- # local d=2 00:06:49.237 16:33:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.237 16:33:37 version -- scripts/common.sh@355 -- # echo 2 00:06:49.237 16:33:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.237 16:33:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.237 16:33:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.237 16:33:37 version -- scripts/common.sh@368 -- # return 0 00:06:49.237 16:33:37 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.237 16:33:37 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:49.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.237 --rc genhtml_branch_coverage=1 00:06:49.237 --rc genhtml_function_coverage=1 00:06:49.237 --rc genhtml_legend=1 00:06:49.237 --rc geninfo_all_blocks=1 00:06:49.237 --rc geninfo_unexecuted_blocks=1 00:06:49.237 00:06:49.237 ' 00:06:49.237 16:33:37 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:49.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.237 --rc genhtml_branch_coverage=1 00:06:49.237 --rc genhtml_function_coverage=1 00:06:49.237 --rc genhtml_legend=1 00:06:49.237 --rc geninfo_all_blocks=1 00:06:49.237 --rc geninfo_unexecuted_blocks=1 00:06:49.237 00:06:49.237 ' 00:06:49.237 16:33:37 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:49.237 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.237 --rc genhtml_branch_coverage=1 00:06:49.237 --rc genhtml_function_coverage=1 00:06:49.237 --rc genhtml_legend=1 00:06:49.237 --rc geninfo_all_blocks=1 00:06:49.237 --rc geninfo_unexecuted_blocks=1 00:06:49.237 00:06:49.237 ' 00:06:49.237 16:33:37 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:49.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.237 --rc genhtml_branch_coverage=1 00:06:49.237 --rc genhtml_function_coverage=1 00:06:49.237 --rc genhtml_legend=1 00:06:49.237 --rc geninfo_all_blocks=1 00:06:49.237 --rc geninfo_unexecuted_blocks=1 00:06:49.237 00:06:49.237 ' 00:06:49.237 16:33:37 version -- app/version.sh@17 -- # get_header_version major 00:06:49.237 16:33:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # cut -f2 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.237 16:33:37 version -- app/version.sh@17 -- # major=25 00:06:49.237 16:33:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:49.237 16:33:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # cut -f2 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.237 16:33:37 version -- app/version.sh@18 -- # minor=1 00:06:49.237 16:33:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:49.237 16:33:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # cut -f2 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.237 16:33:37 version -- app/version.sh@19 -- # patch=0 00:06:49.237 16:33:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:49.237 16:33:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # cut -f2 00:06:49.237 16:33:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:49.237 16:33:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:49.237 16:33:37 version -- app/version.sh@22 -- # version=25.1 00:06:49.237 16:33:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:49.237 16:33:37 version -- app/version.sh@28 -- # version=25.1rc0 00:06:49.237 16:33:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:49.237 16:33:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:49.237 16:33:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:49.237 16:33:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:49.237 00:06:49.237 real 0m0.170s 00:06:49.237 user 0m0.098s 00:06:49.237 sys 0m0.094s 00:06:49.237 16:33:37 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.238 
16:33:37 version -- common/autotest_common.sh@10 -- # set +x 00:06:49.238 ************************************ 00:06:49.238 END TEST version 00:06:49.238 ************************************ 00:06:49.238 16:33:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:49.238 16:33:37 -- spdk/autotest.sh@194 -- # uname -s 00:06:49.238 16:33:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:49.238 16:33:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.238 16:33:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:49.238 16:33:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:49.238 16:33:37 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.238 16:33:37 -- common/autotest_common.sh@10 -- # set +x 00:06:49.238 16:33:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:49.238 16:33:37 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:49.238 16:33:37 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.238 16:33:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.238 16:33:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.238 16:33:37 -- common/autotest_common.sh@10 -- # set +x 00:06:49.238 ************************************ 00:06:49.238 START TEST nvmf_tcp 00:06:49.238 ************************************ 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.238 * Looking for test storage... 
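version.sh above derives each version component the same way: grep the matching #define out of include/spdk/version.h, keep the second field, strip the quotes. A standalone sketch of that extraction (same grep/cut/tr pipeline as the trace; the rc0 handling for a -pre tree is inferred from the traced checks):

  get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)    # 25
  minor=$(get_header_version MINOR)    # 1
  patch=$(get_header_version PATCH)    # 0
  suffix=$(get_header_version SUFFIX)  # -pre
  version="${major}.${minor}"
  (( patch != 0 )) && version="${version}.${patch}"
  [[ ${suffix} == -pre ]] && version="${version}rc0"
  echo "${version}"                    # 25.1rc0, matching py_version above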
00:06:49.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.238 16:33:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:49.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.238 --rc genhtml_branch_coverage=1 00:06:49.238 --rc genhtml_function_coverage=1 00:06:49.238 --rc genhtml_legend=1 00:06:49.238 --rc geninfo_all_blocks=1 00:06:49.238 --rc geninfo_unexecuted_blocks=1 00:06:49.238 00:06:49.238 ' 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:49.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.238 --rc genhtml_branch_coverage=1 00:06:49.238 --rc genhtml_function_coverage=1 00:06:49.238 --rc genhtml_legend=1 00:06:49.238 --rc geninfo_all_blocks=1 00:06:49.238 --rc geninfo_unexecuted_blocks=1 00:06:49.238 00:06:49.238 ' 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:49.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.238 --rc genhtml_branch_coverage=1 00:06:49.238 --rc genhtml_function_coverage=1 00:06:49.238 --rc genhtml_legend=1 00:06:49.238 --rc geninfo_all_blocks=1 00:06:49.238 --rc geninfo_unexecuted_blocks=1 00:06:49.238 00:06:49.238 ' 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:49.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.238 --rc genhtml_branch_coverage=1 00:06:49.238 --rc genhtml_function_coverage=1 00:06:49.238 --rc genhtml_legend=1 00:06:49.238 --rc geninfo_all_blocks=1 00:06:49.238 --rc geninfo_unexecuted_blocks=1 00:06:49.238 00:06:49.238 ' 00:06:49.238 16:33:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:49.238 16:33:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:49.238 16:33:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.238 16:33:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.498 ************************************ 00:06:49.498 START TEST nvmf_target_core 00:06:49.498 ************************************ 00:06:49.498 16:33:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:49.498 * Looking for test storage... 00:06:49.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.498 --rc genhtml_branch_coverage=1 00:06:49.498 --rc genhtml_function_coverage=1 00:06:49.498 --rc genhtml_legend=1 00:06:49.498 --rc geninfo_all_blocks=1 00:06:49.498 --rc geninfo_unexecuted_blocks=1 00:06:49.498 00:06:49.498 ' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.498 --rc genhtml_branch_coverage=1 00:06:49.498 --rc genhtml_function_coverage=1 00:06:49.498 --rc genhtml_legend=1 00:06:49.498 --rc geninfo_all_blocks=1 00:06:49.498 --rc geninfo_unexecuted_blocks=1 00:06:49.498 00:06:49.498 ' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.498 --rc genhtml_branch_coverage=1 00:06:49.498 --rc genhtml_function_coverage=1 00:06:49.498 --rc genhtml_legend=1 00:06:49.498 --rc geninfo_all_blocks=1 00:06:49.498 --rc geninfo_unexecuted_blocks=1 00:06:49.498 00:06:49.498 ' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:49.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.498 --rc genhtml_branch_coverage=1 00:06:49.498 --rc genhtml_function_coverage=1 00:06:49.498 --rc genhtml_legend=1 00:06:49.498 --rc geninfo_all_blocks=1 00:06:49.498 --rc geninfo_unexecuted_blocks=1 00:06:49.498 00:06:49.498 ' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.498 16:33:38 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.499 
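Two details from the common.sh sourcing above are worth noting. First, the '[: : integer expression expected' message is bash rejecting the empty string in '[' '' -eq 1 ']' at nvmf/common.sh line 33; the comparison exits non-zero, the guarded branch is skipped, and the script carries on. Second, the host identity is taken from the nvme CLI rather than hard-coded; a one-line sketch (nvme-cli required; on this host the result is stable, matching the NQN in the trace):

  nvme gen-hostnqn   # e.g. nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb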
************************************ 00:06:49.499 START TEST nvmf_abort 00:06:49.499 ************************************ 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:49.499 * Looking for test storage... 00:06:49.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:49.499 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.757 --rc genhtml_branch_coverage=1 00:06:49.757 --rc genhtml_function_coverage=1 00:06:49.757 --rc genhtml_legend=1 00:06:49.757 --rc geninfo_all_blocks=1 00:06:49.757 --rc geninfo_unexecuted_blocks=1 00:06:49.757 00:06:49.757 ' 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.757 --rc genhtml_branch_coverage=1 00:06:49.757 --rc genhtml_function_coverage=1 00:06:49.757 --rc genhtml_legend=1 00:06:49.757 --rc geninfo_all_blocks=1 00:06:49.757 --rc geninfo_unexecuted_blocks=1 00:06:49.757 00:06:49.757 ' 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.757 --rc genhtml_branch_coverage=1 00:06:49.757 --rc genhtml_function_coverage=1 00:06:49.757 --rc genhtml_legend=1 00:06:49.757 --rc geninfo_all_blocks=1 00:06:49.757 --rc geninfo_unexecuted_blocks=1 00:06:49.757 00:06:49.757 ' 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:49.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.757 --rc genhtml_branch_coverage=1 00:06:49.757 --rc genhtml_function_coverage=1 00:06:49.757 --rc genhtml_legend=1 00:06:49.757 --rc geninfo_all_blocks=1 00:06:49.757 --rc geninfo_unexecuted_blocks=1 00:06:49.757 00:06:49.757 ' 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:49.757 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
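nvmftestinit, expanded below, ends up scanning the PCI bus for supported NICs: common.sh keys off vendor 0x8086 (Intel, with the e810/x722 device-ID lists) and 0x15b3 (Mellanox), which is why the trace that follows reports Found 0000:31:00.0 (0x8086 - 0x159b) under the ice driver, 0x159b being in the e810 list. An equivalent manual check, as a sketch only (lspci instead of the script's own pci_bus_cache walk):

  lspci -nn -d 8086:159b   # list functions with Intel device ID 0x159b, e.g. 0000:31:00.0 and .1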
00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:49.758 16:33:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.203 16:33:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:55.203 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:55.204 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:55.204 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:55.204 16:33:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:55.204 Found net devices under 0000:31:00.0: cvl_0_0 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:55.204 Found net devices under 0000:31:00.1: cvl_0_1 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.204 16:33:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:55.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:06:55.204 00:06:55.204 --- 10.0.0.2 ping statistics --- 00:06:55.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.204 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:55.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:06:55.204 00:06:55.204 --- 10.0.0.1 ping statistics --- 00:06:55.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.204 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:55.204 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1981175 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1981175 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1981175 ']' 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:55.464 16:33:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:55.464 [2024-12-06 16:33:43.963948] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
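The target/initiator plumbing traced above in nvmf_tcp_init can be reproduced standalone; a condensed sketch using the interface names and addresses from this run, assuming root and that both cvl ports exist:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the ipts helper adds an SPDK_NVMF comment
    ping -c 1 10.0.0.2                                         # cross-namespace reachability, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1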
00:06:55.464 [2024-12-06 16:33:43.964017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.464 [2024-12-06 16:33:44.058338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.464 [2024-12-06 16:33:44.087858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.464 [2024-12-06 16:33:44.087908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.464 [2024-12-06 16:33:44.087917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.464 [2024-12-06 16:33:44.087925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.464 [2024-12-06 16:33:44.087932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.464 [2024-12-06 16:33:44.089987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.464 [2024-12-06 16:33:44.090158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.464 [2024-12-06 16:33:44.090194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.401 [2024-12-06 16:33:44.809571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.401 Malloc0 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.401 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.401 Delay0 
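The transport and delay-bdev stack built above via rpc_cmd maps onto plain RPC calls; a sketch using the rpc.py path from this workspace ($rpc is a local shorthand; the subsystem and listener RPCs follow below in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport with the parameters traced above
    $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB backing bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000            # latency knobs in microseconds (1 s each)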
00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.402 [2024-12-06 16:33:44.884608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.402 16:33:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:56.402 [2024-12-06 16:33:44.948958] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:58.938 Initializing NVMe Controllers 00:06:58.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:58.938 controller IO queue size 128 less than required 00:06:58.938 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:58.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:58.938 Initialization complete. Launching workers. 
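The abort stress run just launched drives the target with the example binary directly; its invocation as traced above (the per-queue results follow):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128    # one core, 1-second run, queue depth 128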
00:06:58.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28539 00:06:58.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28600, failed to submit 62 00:06:58.938 success 28543, unsuccessful 57, failed 0 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:58.938 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:58.939 rmmod nvme_tcp 00:06:58.939 rmmod nvme_fabrics 00:06:58.939 rmmod nvme_keyring 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1981175 ']' 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1981175 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1981175 ']' 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1981175 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1981175 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1981175' 00:06:58.939 killing process with pid 1981175 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1981175 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1981175 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.939 16:33:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.939 16:33:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.844 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.844 00:07:00.844 real 0m11.186s 00:07:00.844 user 0m12.719s 00:07:00.844 sys 0m5.108s 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:00.845 ************************************ 00:07:00.845 END TEST nvmf_abort 00:07:00.845 ************************************ 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.845 ************************************ 00:07:00.845 START TEST nvmf_ns_hotplug_stress 00:07:00.845 ************************************ 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:00.845 * Looking for test storage... 
00:07:00.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.845 --rc genhtml_branch_coverage=1 00:07:00.845 --rc genhtml_function_coverage=1 00:07:00.845 --rc genhtml_legend=1 00:07:00.845 --rc geninfo_all_blocks=1 00:07:00.845 --rc geninfo_unexecuted_blocks=1 00:07:00.845 00:07:00.845 ' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.845 --rc genhtml_branch_coverage=1 00:07:00.845 --rc genhtml_function_coverage=1 00:07:00.845 --rc genhtml_legend=1 00:07:00.845 --rc geninfo_all_blocks=1 00:07:00.845 --rc geninfo_unexecuted_blocks=1 00:07:00.845 00:07:00.845 ' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.845 --rc genhtml_branch_coverage=1 00:07:00.845 --rc genhtml_function_coverage=1 00:07:00.845 --rc genhtml_legend=1 00:07:00.845 --rc geninfo_all_blocks=1 00:07:00.845 --rc geninfo_unexecuted_blocks=1 00:07:00.845 00:07:00.845 ' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.845 --rc genhtml_branch_coverage=1 00:07:00.845 --rc genhtml_function_coverage=1 00:07:00.845 --rc genhtml_legend=1 00:07:00.845 --rc geninfo_all_blocks=1 00:07:00.845 --rc geninfo_unexecuted_blocks=1 00:07:00.845 00:07:00.845 ' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.845 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.846 16:33:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:06.117 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.117 
16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:06.117 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.117 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:06.117 Found net devices under 0000:31:00.0: cvl_0_0 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:06.118 Found net devices under 0000:31:00.1: cvl_0_1 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:06.118 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:06.378 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:06.378 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:06.378 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:06.378 16:33:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:06.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:06.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:07:06.378 00:07:06.378 --- 10.0.0.2 ping statistics --- 00:07:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.378 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:06.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:06.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:07:06.378 00:07:06.378 --- 10.0.0.1 ping statistics --- 00:07:06.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:06.378 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1986220 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1986220 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1986220 ']' 00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.378 16:33:55 
00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:06.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:06.378 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:06.637 [2024-12-06 16:33:55.098457] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:07:06.637 [2024-12-06 16:33:55.098512] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:06.637 [2024-12-06 16:33:55.185346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:06.637 [2024-12-06 16:33:55.212518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:06.637 [2024-12-06 16:33:55.212568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:06.637 [2024-12-06 16:33:55.212577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:06.637 [2024-12-06 16:33:55.212584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:06.637 [2024-12-06 16:33:55.212591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
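The waitforlisten call traced above blocks until the freshly launched nvmf_tgt (pid 1986220) is serving the RPC socket at /var/tmp/spdk.sock. A rough sketch of what such a helper does, reconstructed rather than copied from common/autotest_common.sh (waitforlisten_sketch is a hypothetical name; rpc_get_methods is a standard rpc.py method used here purely as a liveness probe):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # The RPC call only succeeds once the app is listening on the socket.
            if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                   -s "$rpc_addr" rpc_get_methods > /dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # retry budget exhausted
    }

    waitforlisten_sketch 1986220 /var/tmp/spdk.sock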
00:07:06.637 [2024-12-06 16:33:55.214606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.637 [2024-12-06 16:33:55.214769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:06.637 [2024-12-06 16:33:55.214770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:07.204 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:07.204 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:07:07.204 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:07.204 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:07.204 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:07.462 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:07.462 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:07.462 16:33:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:07.462 [2024-12-06 16:33:56.049017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:07.462 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:07.720 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:07.720 [2024-12-06 16:33:56.370384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:07.720 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:07.978 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:08.237 Malloc0
00:07:08.237 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:08.237 Delay0
00:07:08.237 16:33:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:08.496 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:08.496 NULL1
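Condensed, the target-side bring-up traced above is the following RPC sequence (rpc.py is the same script the test drives; the $rpc shorthand is added here only for readability, and the flag values are exactly those logged):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, flags as traced
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10                           # -m 10: up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0                     # 32 MiB RAM disk, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000              # delay bdev layered on Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # becomes namespace 1
    $rpc bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev for hotplug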
00:07:08.755 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:08.755 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1986725
00:07:08.755 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725
00:07:08.755 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.755 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:09.013 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.013 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:09.013 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:09.271 [2024-12-06 16:33:57.823142] bdev.c:5432:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1
00:07:09.271 true
00:07:09.271 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725
00:07:09.271 16:33:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.530 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:09.530 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:09.530 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:09.789 true
00:07:09.789 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725
00:07:09.789 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.789 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:10.048 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:10.048 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:10.307 true
00:07:10.307 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- #
kill -0 1986725 00:07:10.307 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.307 16:33:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.565 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:10.565 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:10.825 true 00:07:10.825 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:10.825 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.825 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.084 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:11.084 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:11.085 true 00:07:11.085 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:11.085 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.344 16:33:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.604 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:11.604 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:11.604 true 00:07:11.604 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:11.604 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.864 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.864 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:11.864 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:12.123 true 00:07:12.123 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:12.123 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.382 16:34:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.382 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:12.382 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:12.641 true 00:07:12.641 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:12.641 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.900 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.900 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:12.900 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:13.159 true 00:07:13.159 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:13.159 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.159 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.418 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:13.418 16:34:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:13.677 true 00:07:13.677 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:13.677 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.677 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.936 16:34:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:13.936 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:13.936 true 00:07:14.196 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:14.196 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.196 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.455 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:14.455 16:34:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:14.455 true 00:07:14.455 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:14.455 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.714 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.973 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:14.973 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:14.973 true 00:07:14.973 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:14.973 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.233 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.492 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:15.492 16:34:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:15.492 true 00:07:15.492 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:15.493 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.751 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.751 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:15.752 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:16.009 true 00:07:16.009 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:16.009 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.268 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.268 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:16.268 16:34:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:16.526 true 00:07:16.526 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:16.526 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.526 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.784 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:16.784 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:17.043 true 00:07:17.043 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:17.043 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.043 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.301 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:17.301 16:34:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:17.301 true 00:07:17.560 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:17.560 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.560 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.819 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:17.819 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:17.819 true 00:07:17.819 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:17.819 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.078 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.336 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:18.336 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:18.336 true 00:07:18.336 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:18.336 16:34:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.598 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.857 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:18.857 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:18.857 true 00:07:18.857 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:18.857 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.115 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.374 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:19.374 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:19.374 true 00:07:19.375 16:34:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:19.375 16:34:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.633 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.633 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:19.633 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:19.891 true 00:07:19.891 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:19.891 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.150 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.150 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:20.150 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:20.408 true 00:07:20.409 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:20.409 16:34:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.409 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.667 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:20.667 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:20.926 true 00:07:20.926 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:20.926 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.926 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.185 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:21.185 16:34:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:21.443 true 00:07:21.443 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:21.443 16:34:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.443 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.703 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:21.703 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:21.703 true 00:07:21.703 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:21.703 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.962 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.222 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:22.222 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:22.222 true 00:07:22.222 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:22.222 16:34:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.483 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.741 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:22.741 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:22.741 true 00:07:22.741 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:22.741 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.998 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.998 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:22.998 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:23.256 true 00:07:23.256 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:23.256 16:34:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.515 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.515 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:23.515 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:23.773 true 00:07:23.773 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:23.773 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.032 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.032 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:24.032 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:24.292 true 00:07:24.292 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:24.292 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.292 16:34:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.551 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:24.551 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:24.811 true 00:07:24.811 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:24.811 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.811 16:34:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.071 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:25.071 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:25.071 true 00:07:25.330 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:25.330 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.330 16:34:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.590 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:25.590 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:25.590 true 00:07:25.590 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:25.590 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.850 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.108 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:26.108 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:26.108 true 00:07:26.108 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:26.108 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.367 16:34:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.367 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:26.367 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:26.630 true 00:07:26.630 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:26.630 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.891 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.891 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:26.891 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:27.151 true 00:07:27.151 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:27.151 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.409 16:34:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.409 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:27.409 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:27.667 true 00:07:27.667 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:27.667 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.667 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.924 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:27.924 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:28.183 true 00:07:28.183 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:28.183 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.183 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.441 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:28.441 16:34:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:28.441 true 00:07:28.699 16:34:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:28.699 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.699 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.958 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:28.958 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:29.216 true 00:07:29.216 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:29.216 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.216 16:34:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.474 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:29.474 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:29.474 true 00:07:29.731 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:29.731 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.731 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.990 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:29.990 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:29.990 true 00:07:29.990 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:29.990 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.249 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.508 16:34:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:30.508 16:34:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:30.508 true 00:07:30.508 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:30.508 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.766 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.766 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:30.766 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:31.025 true 00:07:31.026 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:31.026 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.285 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.285 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:31.285 16:34:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:31.543 true 00:07:31.543 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:31.543 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.803 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.803 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:31.803 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:32.063 true 00:07:32.063 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1986725 00:07:32.063 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.063 16:34:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
[16:34:20–16:34:27] ns_hotplug_stress resize loop continues through iterations 1049–1062. Each pass repeats the same five trace lines:
  @44  kill -0 1986725                                               (target app still alive)
  @45  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  @46  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  @49  null_size=<1049 … 1062>
  @50  rpc.py bdev_null_resize NULL1 <null_size>                     (prints "true" on success)
00:07:38.913 Initializing NVMe Controllers
00:07:38.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:38.913 Controller IO queue size 128, less than required.
00:07:38.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:38.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:38.913 Initialization complete. Launching workers.
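A minimal bash sketch of the loop those @44–@50 lines correspond to, reconstructed from the trace (the variable names, the starting size, and the plain while-loop framing are assumptions; the RPC names, arguments, and ordering are exactly as logged):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    pid=1986725                    # PID of the stress target application
    null_size=1048
    while kill -0 "$pid"; do                         # @44: loop until the target exits
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1     # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0   # @46: hot-add the Delay0 bdev back
        null_size=$((null_size + 1))                 # @49: grow the companion null bdev
        "$rpc" bdev_null_resize NULL1 "$null_size"   # @50: resize; prints "true" on success
    done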
00:07:38.913 ========================================================
00:07:38.913                                                                          Latency(us)
00:07:38.913 Device Information                                                      :     IOPS     MiB/s    Average       min        max
00:07:38.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30787.37     15.03    4157.50   1084.34    8182.21
00:07:38.913 ========================================================
00:07:38.913 Total                                                                   : 30787.37     15.03    4157.50   1084.34    8182.21
[16:34:27] Final pass: @49 null_size=1063, @50 bdev_null_resize NULL1 1063 → true.
[16:34:27] @44 kill -0 1986725 now fails — "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1986725) - No such process" — the stress target has exited, so the resize loop ends.
[16:34:27–16:34:28] @53 wait 1986725; cleanup removes both namespaces: @54 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 and @55 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2.
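The throughput columns in the table above are mutually consistent with a 512-byte I/O size: 30787.37 IOPS × 512 B ≈ 15.03 MiB/s. A quick check (the 512 B size is inferred from the numbers, not stated anywhere in the log):

    $ echo 'scale=2; 30787.37 * 512 / (1024 * 1024)' | bc
    15.03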
[16:34:28–16:34:29] Parallel add/remove phase begins: @58 sets nthreads=8 and pids=(), and the setup loop (@59–@60) creates eight null bdevs — bdev_null_create null0 … null7, each 100 MB with a 4096-byte block size — with the RPC echoing each name (null0, null1, …, null7) as it is created. The worker-launch loop (@62) then starts.
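The creation loop as a sketch ($rpc as in the earlier sketch; the size and block-size arguments are taken from the logged RPC calls):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do            # @59
        "$rpc" bdev_null_create "null$i" 100 4096   # @60: name, size in MB, block size
    done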
[16:34:29] Worker fan-out (@62–@64): eight add_remove jobs start in the background and their PIDs are collected via pids+=($!) — the pairings are add_remove 1 null0, 2 null1, 3 null2, 4 null3, 5 null4, 6 null5, 7 null6, 8 null7. Each worker sets its local nsid/bdev (@14) and enters a ten-iteration loop (@16) of @17 nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 <bdev> followed by @18 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid>.
[16:34:29] @66 wait 1994131 1994132 1994133 1994136 1994138 1994140 1994141 1994144
[16:34:29] The first removals land immediately afterwards: @18 remove_ns 3, 7, 8, …
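A sketch of the worker function and the fan-out it produces, reconstructed from the @14–@18 and @62–@66 trace lines (anything beyond the logged RPCs is an assumption):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    add_remove() {
        local nsid=$1 bdev=$2                                        # @14
        for ((i = 0; i < 10; i++)); do                               # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
        done
    }
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # @63: nsid 1–8 paired with null0–null7
        pids+=($!)                          # @64
    done
    wait "${pids[@]}"                       # @66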
[16:34:29–16:34:32] From here the eight workers' traces interleave. Every entry is one of the same two RPCs — @17 nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 <bdev> or @18 nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid> — cycling ten times per worker (@16) over namespaces 1–8 and bdevs null0–null7, with add and remove calls for different namespaces freely overlapping as the scheduler interleaves the background jobs.
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.581 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:43.581 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.581 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.581 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:43.581 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:43.582 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:43.841 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.100 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.387 rmmod nvme_tcp 00:07:44.387 rmmod nvme_fabrics 00:07:44.387 rmmod nvme_keyring 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1986220 ']' 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1986220 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1986220 ']' 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1986220 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.387 16:34:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1986220 00:07:44.387 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:44.387 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:44.387 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1986220' 00:07:44.387 killing process with pid 1986220 
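The interleaved trace above comes from the namespace hot-plug loop at lines 16-18 of ns_hotplug_stress.sh, which repeatedly attaches null bdevs to nqn.2016-06.io.spdk:cnode1 as namespaces and detaches them again while I/O is running; several copies of the loop run concurrently, which is why add/remove calls for different NSIDs interleave. A minimal sketch of one such loop, reconstructed from the @16-@18 xtrace (the random NSID choice and the variable names are assumptions, not the script verbatim):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                                   # loop counter seen at @16
        n=$((RANDOM % 8 + 1))                                          # assumed: pick an NSID in 1..8
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"    # @17: NSID n backed by bdev null(n-1)
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$((RANDOM % 8 + 1))"    # @18: detach a (possibly different) NSID
    done

With the loop done, the trace below resets the EXIT trap (@68), runs nvmftestfini (@70), and unloads the nvme-tcp modules before killing the target process.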
00:07:44.387 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1986220
00:07:44.387 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1986220
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:44.646 16:34:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:46.555 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:46.555
00:07:46.555 real 0m45.805s
00:07:46.555 user 3m13.072s
00:07:46.555 sys 0m15.022s
00:07:46.555 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:46.555 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:46.555 ************************************
00:07:46.555 END TEST nvmf_ns_hotplug_stress
00:07:46.555 ************************************
00:07:46.556 16:34:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:46.556 16:34:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:46.556 16:34:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:46.556 16:34:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:46.556 ************************************
00:07:46.556 START TEST nvmf_delete_subsystem
00:07:46.556 ************************************
00:07:46.556 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:46.815 * Looking for test storage...
00:07:46.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:46.815 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:46.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.816 --rc genhtml_branch_coverage=1
00:07:46.816 --rc genhtml_function_coverage=1
00:07:46.816 --rc genhtml_legend=1
00:07:46.816 --rc geninfo_all_blocks=1
00:07:46.816 --rc geninfo_unexecuted_blocks=1
00:07:46.816
00:07:46.816 '
00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:46.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.816 --rc genhtml_branch_coverage=1
00:07:46.816 --rc genhtml_function_coverage=1
00:07:46.816 --rc genhtml_legend=1
00:07:46.816 --rc geninfo_all_blocks=1
00:07:46.816 --rc geninfo_unexecuted_blocks=1
00:07:46.816
00:07:46.816 '
00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:46.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.816 --rc genhtml_branch_coverage=1
00:07:46.816 --rc genhtml_function_coverage=1
00:07:46.816 --rc genhtml_legend=1
00:07:46.816 --rc geninfo_all_blocks=1
00:07:46.816 --rc geninfo_unexecuted_blocks=1
00:07:46.816
00:07:46.816 '
00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:46.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:46.816 --rc genhtml_branch_coverage=1
00:07:46.816 --rc genhtml_function_coverage=1
00:07:46.816 --rc genhtml_legend=1
00:07:46.816 --rc geninfo_all_blocks=1
00:07:46.816 --rc geninfo_unexecuted_blocks=1
00:07:46.816
00:07:46.816 '
00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.816 16:34:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.084 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:52.085 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.085 
16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:52.085 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:52.085 Found net devices under 0000:31:00.0: cvl_0_0 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:52.085 Found net devices under 0000:31:00.1: cvl_0_1 
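The @366-@428 trace above is nvmf/common.sh walking the two Intel E810 functions it detected (PCI device ID 0x159b) and resolving each PCI address to its kernel net device through sysfs, which yields cvl_0_0 and cvl_0_1. The lookup reduces to a glob over the standard sysfs layout; a simplified sketch of the traced array logic (the explicit PCI addresses are taken from the log, the flattened loop is an assumption):

    # Map each detected NVMf-capable PCI function to its net device name via sysfs.
    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${path##*/}"   # e.g. cvl_0_0
        done
    done

The trace below then moves the target-side device into a private network namespace (cvl_0_0_ns_spdk), assigns 10.0.0.1/10.0.0.2, and pings across the boundary before starting the target.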
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:52.085 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:52.344 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:52.344 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:52.344 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:52.344 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:52.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:52.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms
00:07:52.344
00:07:52.344 --- 10.0.0.2 ping statistics ---
00:07:52.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:52.344 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms
00:07:52.344 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:52.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:52.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms
00:07:52.344
00:07:52.345 --- 10.0.0.1 ping statistics ---
00:07:52.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:52.345 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1999623
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1999623
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1999623 ']'
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:52.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:52.345 16:34:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:52.345 [2024-12-06 16:34:40.906438] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:07:52.345 [2024-12-06 16:34:40.906507] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:52.345 [2024-12-06 16:34:40.989507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:52.345 [2024-12-06 16:34:41.007238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:52.345 [2024-12-06 16:34:41.007271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:52.345 [2024-12-06 16:34:41.007279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:52.345 [2024-12-06 16:34:41.007286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:52.345 [2024-12-06 16:34:41.007292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:52.345 [2024-12-06 16:34:41.008602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:52.345 [2024-12-06 16:34:41.008605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:52.603 [2024-12-06 16:34:41.110282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:52.603 16:34:41
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 [2024-12-06 16:34:41.126464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 NULL1 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 Delay0 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1999668 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:52.603 16:34:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:52.603 [2024-12-06 16:34:41.201148] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
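
The trace above assembles the entire fixture for this test: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a namespace backed by null bdev NULL1 wrapped in delay bdev Delay0 (roughly one second of added latency on every operation), after which spdk_nvme_perf is started in the background so a full queue of slow I/O is guaranteed to be in flight when the subsystem is deleted. A minimal standalone sketch of the same sequence, issued through SPDK's scripts/rpc.py against an already-running nvmf_tgt (the $SPDK path is an assumption, and the test itself drives these calls through its rpc_cmd wrapper rather than rpc.py directly):

# Sketch, not part of the log: rebuild the fixture by hand.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: your checkout

$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev that adds
# ~1,000,000 us to reads and writes so completions cannot drain quickly.
$SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512
$SPDK/scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Background load, as launched by the test: cores 2-3, queue depth 128,
# 70% reads, 512-byte I/O, 5 seconds.
$SPDK/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
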
00:07:54.503 16:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:54.503 16:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.503 16:34:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 [2024-12-06 16:34:43.368299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7febdc000c40 is same with the state(6) to be set 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 
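
The storm of completion errors that starts here, and continues below, is the behavior under test rather than a failure of the harness: nvmf_delete_subsystem runs while the perf job still has a full queue of delayed commands outstanding. Read as NVMe generic status codes, sct=0 with sc=8 (0x08) matches a command aborted due to submission queue deletion, and the interleaved 'starting I/O failed: -6' lines are new submissions that presumably fail with -ENXIO once the qpair is torn down; both readings are inferred from the numeric codes, the log itself does not decode them. The delete and the wait that follows, reconstructed from the delete_subsystem.sh line numbers visible in the trace (the loop body is an approximation, not a verbatim copy of the script):

# Approximate reconstruction of delete_subsystem.sh@32-@38 as traced here.
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @32: pull the subsystem out from under perf

delay=0
while kill -0 "$perf_pid" 2> /dev/null; do   # @35: is perf still alive?
    sleep 0.5                                # @36
    (( delay++ > 30 )) && break              # @38: timeout handling is an assumption
done
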
00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Write completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 starting I/O failed: -6 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.761 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, 
sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 starting I/O failed: -6 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 [2024-12-06 16:34:43.368814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf5f0 is same with the state(6) to be set 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with 
error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Write completed with error (sct=0, sc=8) 00:07:54.762 Read completed with error (sct=0, sc=8) 00:07:55.695 [2024-12-06 16:34:44.343078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2268260 is same with the state(6) to be set 00:07:55.695 Write completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Write completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.695 Write completed with error (sct=0, sc=8) 00:07:55.695 Write completed with error (sct=0, sc=8) 00:07:55.695 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 [2024-12-06 16:34:44.370223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7febdc00d020 is same with the state(6) to be set 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error 
(sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 [2024-12-06 16:34:44.370550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226b0e0 is same with the state(6) to be set 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 [2024-12-06 16:34:44.370668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bf920 is same with the state(6) to be set 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 
00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Write completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 Read completed with error (sct=0, sc=8) 00:07:55.696 [2024-12-06 16:34:44.370866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7febdc00d7c0 is same with the state(6) to be set 00:07:55.696 Initializing NVMe Controllers 00:07:55.696 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:55.696 Controller IO queue size 128, less than required. 00:07:55.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:55.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:55.696 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:55.696 Initialization complete. Launching workers. 
00:07:55.696 ========================================================
00:07:55.696 Latency(us)
00:07:55.696 Device Information : IOPS MiB/s Average min max
00:07:55.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.58 0.08 925270.48 212.59 2001127.15
00:07:55.696 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.12 0.08 952535.65 259.72 2001842.72
00:07:55.696 ========================================================
00:07:55.696 Total : 334.70 0.16 938639.73 212.59 2001842.72
00:07:55.696
00:07:55.696 [2024-12-06 16:34:44.371555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2268260 (9): Bad file descriptor
00:07:55.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:55.696 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.696 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:55.696 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1999668
00:07:55.696 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1999668
00:07:56.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1999668) - No such process
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1999668
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1999668
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1999668
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:56.262 16:34:44
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.262 [2024-12-06 16:34:44.893832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2000359 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:56.262 16:34:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:56.262 [2024-12-06 16:34:44.952451] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
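
One idiom from the trace just above is worth unpacking: after the first perf job dies, the script asserts the failure with NOT wait 1999668, and NOT is an autotest_common.sh helper that runs its arguments and succeeds only when they fail, which is why the trace walks through valid_exec_arg, es=1, and (( !es == 0 )). A simplified reconstruction of that pattern (the real helper also validates that the argument is an executable, builtin, or function, which this sketch skips):

# Simplified sketch of the NOT helper seen in the xtrace above.
NOT() {
    local es=0
    "$@" || es=$?
    # NOT succeeds exactly when the wrapped command failed.
    (( es != 0 ))
}

NOT wait "$perf_pid"   # passes: wait reaps the dead perf job's nonzero status
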
00:07:56.827 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:56.827 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359 00:07:56.827 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.393 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.393 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359 00:07:57.393 16:34:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:57.959 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:57.959 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359 00:07:57.959 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:58.526 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.526 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359 00:07:58.526 16:34:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:58.785 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:58.785 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359 00:07:58.785 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.352 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:59.352 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359 00:07:59.352 16:34:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:59.613 Initializing NVMe Controllers 00:07:59.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:59.613 Controller IO queue size 128, less than required. 00:07:59.613 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:59.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:59.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:59.613 Initialization complete. Launching workers. 
00:07:59.613 ========================================================
00:07:59.613 Latency(us)
00:07:59.613 Device Information : IOPS MiB/s Average min max
00:07:59.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002809.93 1000249.50 1006902.25
00:07:59.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001788.64 1000200.51 1004288.34
00:07:59.614 ========================================================
00:07:59.614 Total : 256.00 0.12 1002299.28 1000200.51 1006902.25
00:07:59.614
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2000359
00:07:59.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2000359) - No such process
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2000359
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:59.878 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1999623 ']'
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1999623
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1999623 ']'
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1999623
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1999623
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1999623' 00:07:59.878 killing process with pid 1999623 00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1999623 00:07:59.878 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1999623 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.135 16:34:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.038 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.038 00:08:02.038 real 0m15.479s 00:08:02.038 user 0m28.320s 00:08:02.038 sys 0m5.312s 00:08:02.038 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.038 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.038 ************************************ 00:08:02.038 END TEST nvmf_delete_subsystem 00:08:02.038 ************************************ 00:08:02.038 16:34:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:02.038 16:34:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.038 16:34:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.038 16:34:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.297 ************************************ 00:08:02.297 START TEST nvmf_host_management 00:08:02.297 ************************************ 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:02.297 * Looking for test storage... 
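
The real/user/sys triple and the starred START TEST / END TEST banners in this stretch of the log come from the run_test wrapper in autotest_common.sh, which times each test script and prints the banners the result parser keys on; nvmf_host_management is launched through the same wrapper with --transport=tcp. A simplified reconstruction of the pattern (banner widths and the wrapper's extra bookkeeping are abbreviated):

# Simplified sketch of the run_test wrapper behind the banners above.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # emits the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

run_test nvmf_host_management \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
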
00:08:02.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:02.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.297 --rc genhtml_branch_coverage=1 00:08:02.297 --rc genhtml_function_coverage=1 00:08:02.297 --rc genhtml_legend=1 00:08:02.297 --rc geninfo_all_blocks=1 00:08:02.297 --rc geninfo_unexecuted_blocks=1 00:08:02.297 00:08:02.297 ' 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:02.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.297 --rc genhtml_branch_coverage=1 00:08:02.297 --rc genhtml_function_coverage=1 00:08:02.297 --rc genhtml_legend=1 00:08:02.297 --rc geninfo_all_blocks=1 00:08:02.297 --rc geninfo_unexecuted_blocks=1 00:08:02.297 00:08:02.297 ' 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:02.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.297 --rc genhtml_branch_coverage=1 00:08:02.297 --rc genhtml_function_coverage=1 00:08:02.297 --rc genhtml_legend=1 00:08:02.297 --rc geninfo_all_blocks=1 00:08:02.297 --rc geninfo_unexecuted_blocks=1 00:08:02.297 00:08:02.297 ' 00:08:02.297 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:02.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.297 --rc genhtml_branch_coverage=1 00:08:02.297 --rc genhtml_function_coverage=1 00:08:02.297 --rc genhtml_legend=1 00:08:02.297 --rc geninfo_all_blocks=1 00:08:02.297 --rc geninfo_unexecuted_blocks=1 00:08:02.297 00:08:02.297 ' 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:08:02.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.298 16:34:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.572 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.572 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.573 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:07.573 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.573 16:34:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.573 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.573 16:34:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:07.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:08:07.573 00:08:07.573 --- 10.0.0.2 ping statistics --- 00:08:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.573 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:08:07.573 00:08:07.573 --- 10.0.0.1 ping statistics --- 00:08:07.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.573 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2005704 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2005704 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2005704 ']' 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
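Note: nvmf_tcp_init above stitches the two ice ports (0000:31:00.0 and 0000:31:00.1, surfaced as net devices cvl_0_0 and cvl_0_1) into a point-to-point test network: the target-side port moves into a private namespace as 10.0.0.2 while the initiator keeps cvl_0_1 as 10.0.0.1. Condensed from the trace, commands verbatim, with only the comments added:

    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open NVMe/TCP port 4420, tagged so nvmftestfini can strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two ping checks (0.459 ms initiator-to-target, 0.161 ms back) confirm the plumbing, modprobe nvme-tcp loads the kernel initiator, and nvmfappstart brings up the target next.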
00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.573 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:07.573 [2024-12-06 16:34:56.261902] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:08:07.573 [2024-12-06 16:34:56.261966] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.832 [2024-12-06 16:34:56.338403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.832 [2024-12-06 16:34:56.359792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.832 [2024-12-06 16:34:56.359831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.832 [2024-12-06 16:34:56.359837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.833 [2024-12-06 16:34:56.359843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.833 [2024-12-06 16:34:56.359850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
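Note: starttarget launches the SPDK target inside that namespace; the invocation traced above is

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
    # -m 0x1E = 0b11110: reactors on cores 1-4 only, which is why four
    #   "Reactor started" notices follow; core 0 is left free for the
    #   bdevperf clients below, which run with -c 0x1
    # -e 0xFFFF: enable every tracepoint group (hence the spdk_trace notices)
    # -i 0: shared-memory instance id, matching --file-prefix=spdk0 in the EAL args

waitforlisten then blocks on /var/tmp/spdk.sock until the app (pid 2005704) answers.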
00:08:07.833 [2024-12-06 16:34:56.361642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.833 [2024-12-06 16:34:56.361803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.833 [2024-12-06 16:34:56.361940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.833 [2024-12-06 16:34:56.361942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.833 [2024-12-06 16:34:56.463742] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.833 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:07.833 Malloc0 00:08:08.092 [2024-12-06 16:34:56.527098] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2005745 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2005745 /var/tmp/bdevperf.sock 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2005745 ']' 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:08.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:08.092 { 00:08:08.092 "params": { 00:08:08.092 "name": "Nvme$subsystem", 00:08:08.092 "trtype": "$TEST_TRANSPORT", 00:08:08.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:08.092 "adrfam": "ipv4", 00:08:08.092 "trsvcid": "$NVMF_PORT", 00:08:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:08.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:08.092 "hdgst": ${hdgst:-false}, 00:08:08.092 "ddgst": ${ddgst:-false} 00:08:08.092 }, 00:08:08.092 "method": "bdev_nvme_attach_controller" 00:08:08.092 } 00:08:08.092 EOF 00:08:08.092 )") 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:08.092 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:08.092 "params": { 00:08:08.092 "name": "Nvme0", 00:08:08.092 "trtype": "tcp", 00:08:08.092 "traddr": "10.0.0.2", 00:08:08.092 "adrfam": "ipv4", 00:08:08.092 "trsvcid": "4420", 00:08:08.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:08.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:08.092 "hdgst": false, 00:08:08.092 "ddgst": false 00:08:08.092 }, 00:08:08.092 "method": "bdev_nvme_attach_controller" 00:08:08.092 }' 00:08:08.092 [2024-12-06 16:34:56.598532] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
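Note: reflowed for readability, this is the connection JSON that gen_nvmf_target_json 0 assembled above and handed to bdevperf on /dev/fd/63; it is exactly the fragment the trace's printf resolved, with $subsystem, $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT substituted as 0, tcp, 10.0.0.2 and 4420:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

One bdev_nvme_attach_controller call connects controller Nvme0 to the subsystem listening at 10.0.0.2:4420; the resulting Nvme0n1 bdev is what the -q 64 -o 65536 -w verify -t 10 workload drives.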
00:08:08.092 [2024-12-06 16:34:56.598580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005745 ] 00:08:08.092 [2024-12-06 16:34:56.675661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.092 [2024-12-06 16:34:56.693933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.351 Running I/O for 10 seconds... 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:08.351 16:34:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:08.612 
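Note: the (( i = 10 )) / (( i != 0 )) / (( i-- )) trace around here is host_management.sh's waitforio helper. Reassembled from the xtrace lines into a sketch (the loop shape and local declarations are inferred from the trace; rpc_cmd is the suite's wrapper around SPDK's rpc.py):

    # Poll bdevperf's iostat until bdev Nvme0n1 has completed at least 100
    # reads, sampling every 0.25 s and giving up after 10 tries.
    local ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret

The first sample reads 67 ops, under the threshold; after one 0.25 s sleep the second reads 515 and the loop succeeds. With I/O confirmed in flight, the test removes host0 from cnode0 (nvmf_subsystem_remove_host), and the wall of ABORTED - SQ DELETION completions below is the intended fallout: the qpair is torn down and all 64 queued WRITEs (lba 73728 through 81792, matching the -q 64 queue depth) abort, so bdevperf's job fails at offset 73728 after 0.38 s (1497.13 IOPS). Once the host is re-added (nvmf_subsystem_add_host), the controller reset completes successfully.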
16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.612 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.612 [2024-12-06 16:34:57.253257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is 
same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.253367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43200 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.254063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:08.612 [2024-12-06 16:34:57.254107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.254118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:08.612 [2024-12-06 16:34:57.254126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.254134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:08.612 [2024-12-06 16:34:57.254142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.254150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:08.612 [2024-12-06 16:34:57.254158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.254165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x988710 is same with the state(6) to be set 00:08:08.612 [2024-12-06 16:34:57.255031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 
16:34:57.255136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255312] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.612 [2024-12-06 16:34:57.255401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.612 [2024-12-06 16:34:57.255409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.255986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.255993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.256164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:08.613 [2024-12-06 16:34:57.256172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:08.613 [2024-12-06 16:34:57.257391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:08.613 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.613 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:08.613 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.613 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:08.613 task offset: 73728 on job bdev=Nvme0n1 fails 00:08:08.613 00:08:08.613 Latency(us) 00:08:08.613 [2024-12-06T15:34:57.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.613 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:08.613 Job: Nvme0n1 ended in about 0.38 seconds with error 00:08:08.613 Verification LBA range: start 0x0 length 0x400 00:08:08.613 Nvme0n1 : 0.38 1497.13 93.57 166.35 0.00 37225.33 1590.61 34952.53 00:08:08.613 [2024-12-06T15:34:57.306Z] =================================================================================================================== 00:08:08.613 [2024-12-06T15:34:57.306Z] Total : 1497.13 93.57 166.35 0.00 37225.33 1590.61 34952.53 00:08:08.613 [2024-12-06 16:34:57.259442] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.613 [2024-12-06 16:34:57.259466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x988710 (9): Bad file descriptor 00:08:08.613 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.613 16:34:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:08.613 [2024-12-06 16:34:57.272144] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:08:09.989 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2005745 00:08:09.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2005745) - No such process 00:08:09.989 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:09.990 { 00:08:09.990 "params": { 00:08:09.990 "name": "Nvme$subsystem", 00:08:09.990 "trtype": "$TEST_TRANSPORT", 00:08:09.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.990 "adrfam": "ipv4", 00:08:09.990 "trsvcid": "$NVMF_PORT", 00:08:09.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.990 "hdgst": ${hdgst:-false}, 00:08:09.990 "ddgst": 
${ddgst:-false} 00:08:09.990 }, 00:08:09.990 "method": "bdev_nvme_attach_controller" 00:08:09.990 } 00:08:09.990 EOF 00:08:09.990 )") 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:09.990 16:34:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:09.990 "params": { 00:08:09.990 "name": "Nvme0", 00:08:09.990 "trtype": "tcp", 00:08:09.990 "traddr": "10.0.0.2", 00:08:09.990 "adrfam": "ipv4", 00:08:09.990 "trsvcid": "4420", 00:08:09.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:09.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:09.990 "hdgst": false, 00:08:09.990 "ddgst": false 00:08:09.990 }, 00:08:09.990 "method": "bdev_nvme_attach_controller" 00:08:09.990 }' 00:08:09.990 [2024-12-06 16:34:58.302564] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:08:09.990 [2024-12-06 16:34:58.302618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006103 ] 00:08:09.990 [2024-12-06 16:34:58.380010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.990 [2024-12-06 16:34:58.397067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.990 Running I/O for 1 seconds... 00:08:10.927 1600.00 IOPS, 100.00 MiB/s 00:08:10.927 Latency(us) 00:08:10.927 [2024-12-06T15:34:59.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.927 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:10.927 Verification LBA range: start 0x0 length 0x400 00:08:10.927 Nvme0n1 : 1.01 1650.67 103.17 0.00 0.00 38091.17 5597.87 32549.55 00:08:10.927 [2024-12-06T15:34:59.620Z] =================================================================================================================== 00:08:10.927 [2024-12-06T15:34:59.620Z] Total : 1650.67 103.17 0.00 0.00 38091.17 5597.87 32549.55 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:11.185 16:34:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.185 rmmod nvme_tcp 00:08:11.185 rmmod nvme_fabrics 00:08:11.185 rmmod nvme_keyring 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2005704 ']' 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2005704 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2005704 ']' 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2005704 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2005704 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2005704' 00:08:11.185 killing process with pid 2005704 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2005704 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2005704 00:08:11.185 [2024-12-06 16:34:59.853823] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.185 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.444 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.444 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.444 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.444 16:34:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.444 16:34:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:13.363 00:08:13.363 real 0m11.194s 00:08:13.363 user 0m17.331s 00:08:13.363 sys 0m4.857s 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.363 ************************************ 00:08:13.363 END TEST nvmf_host_management 00:08:13.363 ************************************ 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.363 ************************************ 00:08:13.363 START TEST nvmf_lvol 00:08:13.363 ************************************ 00:08:13.363 16:35:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:13.363 * Looking for test storage... 
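For reference, the host-management run above drives bdevperf from a generated JSON config rather than CLI flags. Below is a minimal sketch of the bdev_nvme_attach_controller entry that gen_nvmf_target_json printed above, using the values this run actually resolved (10.0.0.2:4420, cnode0/host0, digests off). In the real test this object is embedded inside a larger SPDK subsystem config and streamed to bdevperf on /dev/fd/62 via --json; the attach.json file name here is illustrative only, and the enclosing config wrapper is not shown in this log.

# Sketch: persist the attach entry shown in the log above.
# "attach.json" is a hypothetical file name; the harness never writes a file.
cat > attach.json <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF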
00:08:13.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.363 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:13.363 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:08:13.363 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.622 --rc genhtml_branch_coverage=1 00:08:13.622 --rc genhtml_function_coverage=1 00:08:13.622 --rc genhtml_legend=1 00:08:13.622 --rc geninfo_all_blocks=1 00:08:13.622 --rc geninfo_unexecuted_blocks=1 00:08:13.622 00:08:13.622 ' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.622 --rc genhtml_branch_coverage=1 00:08:13.622 --rc genhtml_function_coverage=1 00:08:13.622 --rc genhtml_legend=1 00:08:13.622 --rc geninfo_all_blocks=1 00:08:13.622 --rc geninfo_unexecuted_blocks=1 00:08:13.622 00:08:13.622 ' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.622 --rc genhtml_branch_coverage=1 00:08:13.622 --rc genhtml_function_coverage=1 00:08:13.622 --rc genhtml_legend=1 00:08:13.622 --rc geninfo_all_blocks=1 00:08:13.622 --rc geninfo_unexecuted_blocks=1 00:08:13.622 00:08:13.622 ' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:13.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.622 --rc genhtml_branch_coverage=1 00:08:13.622 --rc genhtml_function_coverage=1 00:08:13.622 --rc genhtml_legend=1 00:08:13.622 --rc geninfo_all_blocks=1 00:08:13.622 --rc geninfo_unexecuted_blocks=1 00:08:13.622 00:08:13.622 ' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
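The trace just above also shows the dotted-version comparison behind the coverage check (lt 1.15 2 dispatching to cmp_versions in scripts/common.sh). A simplified stand-alone sketch of that logic follows; it assumes purely numeric components, whereas the real helper additionally validates each component with the decimal/regex checks visible in the trace.

# Sketch: component-wise "less than" for dotted versions, as traced above.
lt() {
  local -a v1 v2
  local i n
  IFS='.-:' read -ra v1 <<< "$1"   # split on the same separators the trace uses
  IFS='.-:' read -ra v2 <<< "$2"
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # strictly smaller component
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1  # strictly larger component
  done
  return 1  # versions are equal, so not "less than"
}
lt 1.15 2 && echo 'lcov 1.15 predates 2: enable the legacy lcov options'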
00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.622 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.623 16:35:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:18.897 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:18.897 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:18.897 16:35:07 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:18.897 Found net devices under 0000:31:00.0: cvl_0_0 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:18.897 Found net devices under 0000:31:00.1: cvl_0_1 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:18.897 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:18.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:08:18.898 00:08:18.898 --- 10.0.0.2 ping statistics --- 00:08:18.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.898 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:08:18.898 00:08:18.898 --- 10.0.0.1 ping statistics --- 00:08:18.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.898 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2010795 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2010795 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2010795 ']' 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:18.898 16:35:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:18.898 [2024-12-06 16:35:07.573389] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:08:18.898 [2024-12-06 16:35:07.573442] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.158 [2024-12-06 16:35:07.658545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.158 [2024-12-06 16:35:07.684695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.158 [2024-12-06 16:35:07.684742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.158 [2024-12-06 16:35:07.684751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.158 [2024-12-06 16:35:07.684758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.158 [2024-12-06 16:35:07.684764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.158 [2024-12-06 16:35:07.686394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.158 [2024-12-06 16:35:07.686552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.158 [2024-12-06 16:35:07.686554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.726 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.726 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:19.726 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:19.726 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:19.726 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.726 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.726 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:19.986 [2024-12-06 16:35:08.536896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.986 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.245 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:20.245 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:20.245 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:20.245 16:35:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:20.505 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:20.764 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a01a0324-a49c-4315-a77d-9c0ff49c89d4 00:08:20.764 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a01a0324-a49c-4315-a77d-9c0ff49c89d4 lvol 20 00:08:20.764 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=153b3b5f-23d9-4d20-8555-8754fe444358 00:08:20.764 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:21.023 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 153b3b5f-23d9-4d20-8555-8754fe444358 00:08:21.282 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.282 [2024-12-06 16:35:09.905734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.282 16:35:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.541 16:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2011489 00:08:21.541 16:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:21.542 16:35:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:22.481 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 153b3b5f-23d9-4d20-8555-8754fe444358 MY_SNAPSHOT 00:08:22.755 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a9f850fa-c085-45ca-8dc7-5fd0cda9ce29 00:08:22.755 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 153b3b5f-23d9-4d20-8555-8754fe444358 30 00:08:23.015 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a9f850fa-c085-45ca-8dc7-5fd0cda9ce29 MY_CLONE 00:08:23.015 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2e621804-61df-4f30-82c0-dd2e25603878 00:08:23.015 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2e621804-61df-4f30-82c0-dd2e25603878 00:08:23.274 16:35:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2011489 00:08:33.437 Initializing NVMe Controllers 00:08:33.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:33.437 Controller IO queue size 128, less than required. 00:08:33.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:33.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:33.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:33.437 Initialization complete. Launching workers. 00:08:33.437 ======================================================== 00:08:33.437 Latency(us) 00:08:33.437 Device Information : IOPS MiB/s Average min max 00:08:33.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17212.40 67.24 7438.00 3001.01 46431.77 00:08:33.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17373.30 67.86 7369.19 1280.39 47999.50 00:08:33.437 ======================================================== 00:08:33.437 Total : 34585.70 135.10 7403.44 1280.39 47999.50 00:08:33.437 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 153b3b5f-23d9-4d20-8555-8754fe444358 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a01a0324-a49c-4315-a77d-9c0ff49c89d4 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.437 rmmod nvme_tcp 00:08:33.437 rmmod nvme_fabrics 00:08:33.437 rmmod nvme_keyring 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2010795 ']' 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2010795 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2010795 ']' 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2010795 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2010795 00:08:33.437 16:35:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2010795' 00:08:33.437 killing process with pid 2010795 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2010795 00:08:33.437 16:35:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2010795 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.437 16:35:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.818 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.818 00:08:34.818 real 0m21.124s 00:08:34.818 user 1m2.094s 00:08:34.818 sys 0m6.815s 00:08:34.818 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.818 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.818 ************************************ 00:08:34.818 END TEST nvmf_lvol 00:08:34.818 ************************************ 00:08:34.818 16:35:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.818 16:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.818 16:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.819 ************************************ 00:08:34.819 START TEST nvmf_lvs_grow 00:08:34.819 ************************************ 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:34.819 * Looking for test storage... 
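For orientation, the nvmf_lvol test that just passed boils down to the RPC sequence below. This is a condensed sketch, not the script itself: rpc is assumed to point at spdk/scripts/rpc.py, and the UUIDs are captured from each create call instead of the literal values this run happened to get (lvstore a01a0324-..., lvol 153b3b5f-...).

# Sketch: the nvmf_lvol flow exercised above, end to end.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                        # -> Malloc0
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current data
$rpc bdev_lvol_resize "$lvol" 30                      # grow the live volume to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot
# Teardown, as at the end of the run above:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"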
00:08:34.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:34.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.819 --rc genhtml_branch_coverage=1 00:08:34.819 --rc genhtml_function_coverage=1 00:08:34.819 --rc genhtml_legend=1 00:08:34.819 --rc geninfo_all_blocks=1 00:08:34.819 --rc geninfo_unexecuted_blocks=1 00:08:34.819 00:08:34.819 ' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:34.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.819 --rc genhtml_branch_coverage=1 00:08:34.819 --rc genhtml_function_coverage=1 00:08:34.819 --rc genhtml_legend=1 00:08:34.819 --rc geninfo_all_blocks=1 00:08:34.819 --rc geninfo_unexecuted_blocks=1 00:08:34.819 00:08:34.819 ' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:34.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.819 --rc genhtml_branch_coverage=1 00:08:34.819 --rc genhtml_function_coverage=1 00:08:34.819 --rc genhtml_legend=1 00:08:34.819 --rc geninfo_all_blocks=1 00:08:34.819 --rc geninfo_unexecuted_blocks=1 00:08:34.819 00:08:34.819 ' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:34.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.819 --rc genhtml_branch_coverage=1 00:08:34.819 --rc genhtml_function_coverage=1 00:08:34.819 --rc genhtml_legend=1 00:08:34.819 --rc geninfo_all_blocks=1 00:08:34.819 --rc geninfo_unexecuted_blocks=1 00:08:34.819 00:08:34.819 ' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:34.819 16:35:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.819 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:34.820 16:35:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.096 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:40.097 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:40.097 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.097 16:35:28 
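Above, gather_supported_nvmf_pci_devs buckets PCI functions by vendor:device ID (the e810/x722/mlx arrays) and then resolves each selected function to its kernel interface through sysfs. Roughly, assuming pci_bus_cache is the prepopulated "vendor:device -> PCI address" map that nvmf/common.sh builds earlier:

  intel=0x8086
  e810=(${pci_bus_cache["$intel:0x159b"]})            # the two ports found above
  pci_devs=("${e810[@]}")
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../0000:31:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done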
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:40.097 Found net devices under 0000:31:00.0: cvl_0_0 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:40.097 Found net devices under 0000:31:00.1: cvl_0_1 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:40.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:40.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms
00:08:40.097
00:08:40.097 --- 10.0.0.2 ping statistics ---
00:08:40.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.097 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms
00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:40.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:40.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:08:40.097 00:08:40.097 --- 10.0.0.1 ping statistics --- 00:08:40.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.097 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2018204 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2018204 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2018204 ']' 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.097 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.098 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.098 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.098 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.098 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:40.098 [2024-12-06 16:35:28.741096] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
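At this point nvmftestinit has moved the target port (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so the two pings above really cross the link between the NIC ports. nvmfappstart then launches nvmf_tgt inside that namespace and blocks until the RPC socket answers; a rough sketch of that launch-and-wait, with an illustrative retry cadence (the real helper is waitforlisten in autotest_common.sh):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do                      # max_retries=100, as traced
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
  done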
00:08:40.098 [2024-12-06 16:35:28.741192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.356 [2024-12-06 16:35:28.816450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.356 [2024-12-06 16:35:28.836852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.356 [2024-12-06 16:35:28.836893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.356 [2024-12-06 16:35:28.836900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.356 [2024-12-06 16:35:28.836905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.356 [2024-12-06 16:35:28.836910] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.356 [2024-12-06 16:35:28.837485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.356 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.356 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:40.356 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.356 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.356 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.356 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.356 16:35:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.616 [2024-12-06 16:35:29.076520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.616 ************************************ 00:08:40.616 START TEST lvs_grow_clean 00:08:40.616 ************************************ 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:40.616 16:35:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.616 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.874 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:40.874 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:40.874 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:41.131 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:41.131 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:41.131 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4fe74609-28b5-473e-a8c8-ab81bf744455 lvol 150 00:08:41.131 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=eec1a7cd-c231-4a5f-af90-a658dc7c97de 00:08:41.131 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.131 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:41.390 [2024-12-06 16:35:29.902601] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:41.390 [2024-12-06 16:35:29.902642] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:41.390 true 00:08:41.390 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:41.390 16:35:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:41.390 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:41.390 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.649 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eec1a7cd-c231-4a5f-af90-a658dc7c97de 00:08:41.908 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:41.908 [2024-12-06 16:35:30.528463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.908 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2018589 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2018589 /var/tmp/bdevperf.sock 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2018589 ']' 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:42.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.167 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:42.167 [2024-12-06 16:35:30.713138] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
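Condensing the traces above: lvs_grow backs an lvstore (4 MiB clusters) with a 200 MiB file-based AIO bdev, carves a 150 MiB lvol out of it, then doubles the backing file and rescans so the bdev reports the new block count; the lvol is exported over NVMe/TCP and bdevperf is started against it, with bdev_lvol_grow_lvstore issued while the I/O runs (two seconds in, per the trace that follows). The same steps by hand, paths shortened to the repo root:

  rpc=./scripts/rpc.py
  aio=./test/nvmf/target/aio_bdev

  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

  truncate -s 400M "$aio"          # grow the backing file...
  $rpc bdev_aio_rescan aio_bdev    # ...and let the AIO bdev pick up the new block count

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &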
00:08:42.167 [2024-12-06 16:35:30.713177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018589 ] 00:08:42.167 [2024-12-06 16:35:30.781484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.167 [2024-12-06 16:35:30.800255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.425 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.425 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:42.425 16:35:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.684 Nvme0n1 00:08:42.684 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:42.684 [ 00:08:42.684 { 00:08:42.684 "name": "Nvme0n1", 00:08:42.684 "aliases": [ 00:08:42.684 "eec1a7cd-c231-4a5f-af90-a658dc7c97de" 00:08:42.684 ], 00:08:42.684 "product_name": "NVMe disk", 00:08:42.684 "block_size": 4096, 00:08:42.684 "num_blocks": 38912, 00:08:42.684 "uuid": "eec1a7cd-c231-4a5f-af90-a658dc7c97de", 00:08:42.684 "numa_id": 0, 00:08:42.684 "assigned_rate_limits": { 00:08:42.684 "rw_ios_per_sec": 0, 00:08:42.684 "rw_mbytes_per_sec": 0, 00:08:42.684 "r_mbytes_per_sec": 0, 00:08:42.684 "w_mbytes_per_sec": 0 00:08:42.684 }, 00:08:42.684 "claimed": false, 00:08:42.684 "zoned": false, 00:08:42.684 "supported_io_types": { 00:08:42.684 "read": true, 00:08:42.684 "write": true, 00:08:42.684 "unmap": true, 00:08:42.684 "flush": true, 00:08:42.684 "reset": true, 00:08:42.684 "nvme_admin": true, 00:08:42.684 "nvme_io": true, 00:08:42.684 "nvme_io_md": false, 00:08:42.684 "write_zeroes": true, 00:08:42.684 "zcopy": false, 00:08:42.684 "get_zone_info": false, 00:08:42.684 "zone_management": false, 00:08:42.684 "zone_append": false, 00:08:42.684 "compare": true, 00:08:42.684 "compare_and_write": true, 00:08:42.684 "abort": true, 00:08:42.684 "seek_hole": false, 00:08:42.684 "seek_data": false, 00:08:42.684 "copy": true, 00:08:42.684 "nvme_iov_md": false 00:08:42.684 }, 00:08:42.684 "memory_domains": [ 00:08:42.684 { 00:08:42.684 "dma_device_id": "system", 00:08:42.684 "dma_device_type": 1 00:08:42.684 } 00:08:42.684 ], 00:08:42.684 "driver_specific": { 00:08:42.684 "nvme": [ 00:08:42.684 { 00:08:42.684 "trid": { 00:08:42.684 "trtype": "TCP", 00:08:42.684 "adrfam": "IPv4", 00:08:42.684 "traddr": "10.0.0.2", 00:08:42.684 "trsvcid": "4420", 00:08:42.684 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:42.684 }, 00:08:42.684 "ctrlr_data": { 00:08:42.684 "cntlid": 1, 00:08:42.684 "vendor_id": "0x8086", 00:08:42.684 "model_number": "SPDK bdev Controller", 00:08:42.684 "serial_number": "SPDK0", 00:08:42.684 "firmware_revision": "25.01", 00:08:42.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.684 "oacs": { 00:08:42.684 "security": 0, 00:08:42.684 "format": 0, 00:08:42.684 "firmware": 0, 00:08:42.684 "ns_manage": 0 00:08:42.684 }, 00:08:42.684 "multi_ctrlr": true, 00:08:42.684 
"ana_reporting": false 00:08:42.684 }, 00:08:42.684 "vs": { 00:08:42.684 "nvme_version": "1.3" 00:08:42.684 }, 00:08:42.684 "ns_data": { 00:08:42.684 "id": 1, 00:08:42.684 "can_share": true 00:08:42.684 } 00:08:42.684 } 00:08:42.684 ], 00:08:42.684 "mp_policy": "active_passive" 00:08:42.684 } 00:08:42.684 } 00:08:42.684 ] 00:08:42.684 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2018901 00:08:42.684 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:42.684 16:35:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:42.943 Running I/O for 10 seconds... 00:08:43.880 Latency(us) 00:08:43.880 [2024-12-06T15:35:32.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.880 Nvme0n1 : 1.00 25226.00 98.54 0.00 0.00 0.00 0.00 0.00 00:08:43.880 [2024-12-06T15:35:32.573Z] =================================================================================================================== 00:08:43.880 [2024-12-06T15:35:32.573Z] Total : 25226.00 98.54 0.00 0.00 0.00 0.00 0.00 00:08:43.880 00:08:44.816 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:44.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.816 Nvme0n1 : 2.00 25424.50 99.31 0.00 0.00 0.00 0.00 0.00 00:08:44.816 [2024-12-06T15:35:33.509Z] =================================================================================================================== 00:08:44.816 [2024-12-06T15:35:33.509Z] Total : 25424.50 99.31 0.00 0.00 0.00 0.00 0.00 00:08:44.816 00:08:44.816 true 00:08:44.816 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:44.816 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:45.076 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.076 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.076 16:35:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2018901 00:08:46.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.013 Nvme0n1 : 3.00 25492.67 99.58 0.00 0.00 0.00 0.00 0.00 00:08:46.013 [2024-12-06T15:35:34.706Z] =================================================================================================================== 00:08:46.013 [2024-12-06T15:35:34.706Z] Total : 25492.67 99.58 0.00 0.00 0.00 0.00 0.00 00:08:46.013 00:08:46.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.978 Nvme0n1 : 4.00 25551.25 99.81 0.00 0.00 0.00 0.00 0.00 00:08:46.978 [2024-12-06T15:35:35.671Z] 
=================================================================================================================== 00:08:46.978 [2024-12-06T15:35:35.671Z] Total : 25551.25 99.81 0.00 0.00 0.00 0.00 0.00 00:08:46.978 00:08:47.915 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.915 Nvme0n1 : 5.00 25599.00 100.00 0.00 0.00 0.00 0.00 0.00 00:08:47.915 [2024-12-06T15:35:36.608Z] =================================================================================================================== 00:08:47.915 [2024-12-06T15:35:36.608Z] Total : 25599.00 100.00 0.00 0.00 0.00 0.00 0.00 00:08:47.915 00:08:48.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.851 Nvme0n1 : 6.00 25620.33 100.08 0.00 0.00 0.00 0.00 0.00 00:08:48.851 [2024-12-06T15:35:37.544Z] =================================================================================================================== 00:08:48.851 [2024-12-06T15:35:37.544Z] Total : 25620.33 100.08 0.00 0.00 0.00 0.00 0.00 00:08:48.851 00:08:49.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.787 Nvme0n1 : 7.00 25644.86 100.18 0.00 0.00 0.00 0.00 0.00 00:08:49.787 [2024-12-06T15:35:38.480Z] =================================================================================================================== 00:08:49.787 [2024-12-06T15:35:38.480Z] Total : 25644.86 100.18 0.00 0.00 0.00 0.00 0.00 00:08:49.787 00:08:50.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.723 Nvme0n1 : 8.00 25667.25 100.26 0.00 0.00 0.00 0.00 0.00 00:08:50.723 [2024-12-06T15:35:39.416Z] =================================================================================================================== 00:08:50.723 [2024-12-06T15:35:39.416Z] Total : 25667.25 100.26 0.00 0.00 0.00 0.00 0.00 00:08:50.723 00:08:52.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.100 Nvme0n1 : 9.00 25677.22 100.30 0.00 0.00 0.00 0.00 0.00 00:08:52.100 [2024-12-06T15:35:40.793Z] =================================================================================================================== 00:08:52.100 [2024-12-06T15:35:40.793Z] Total : 25677.22 100.30 0.00 0.00 0.00 0.00 0.00 00:08:52.100 00:08:53.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.036 Nvme0n1 : 10.00 25688.60 100.35 0.00 0.00 0.00 0.00 0.00 00:08:53.036 [2024-12-06T15:35:41.729Z] =================================================================================================================== 00:08:53.036 [2024-12-06T15:35:41.729Z] Total : 25688.60 100.35 0.00 0.00 0.00 0.00 0.00 00:08:53.036 00:08:53.036 00:08:53.036 Latency(us) 00:08:53.036 [2024-12-06T15:35:41.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.036 Nvme0n1 : 10.00 25690.80 100.35 0.00 0.00 4979.24 2375.68 15073.28 00:08:53.036 [2024-12-06T15:35:41.729Z] =================================================================================================================== 00:08:53.036 [2024-12-06T15:35:41.729Z] Total : 25690.80 100.35 0.00 0.00 4979.24 2375.68 15073.28 00:08:53.036 { 00:08:53.036 "results": [ 00:08:53.036 { 00:08:53.036 "job": "Nvme0n1", 00:08:53.036 "core_mask": "0x2", 00:08:53.036 "workload": "randwrite", 00:08:53.036 "status": "finished", 00:08:53.036 "queue_depth": 128, 00:08:53.036 "io_size": 4096, 
00:08:53.036 "runtime": 10.004126, 00:08:53.036 "iops": 25690.79997592993, 00:08:53.036 "mibps": 100.35468740597629, 00:08:53.036 "io_failed": 0, 00:08:53.036 "io_timeout": 0, 00:08:53.036 "avg_latency_us": 4979.237737192007, 00:08:53.036 "min_latency_us": 2375.68, 00:08:53.036 "max_latency_us": 15073.28 00:08:53.036 } 00:08:53.036 ], 00:08:53.036 "core_count": 1 00:08:53.036 } 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2018589 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2018589 ']' 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2018589 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2018589 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2018589' 00:08:53.036 killing process with pid 2018589 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2018589 00:08:53.036 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.036 00:08:53.036 Latency(us) 00:08:53.036 [2024-12-06T15:35:41.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.036 [2024-12-06T15:35:41.729Z] =================================================================================================================== 00:08:53.036 [2024-12-06T15:35:41.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2018589 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.036 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:53.295 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:53.295 16:35:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:53.553 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:53.553 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:53.553 16:35:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.553 [2024-12-06 16:35:42.174497] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:53.554 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:53.813 request: 00:08:53.813 { 00:08:53.813 "uuid": "4fe74609-28b5-473e-a8c8-ab81bf744455", 00:08:53.813 "method": "bdev_lvol_get_lvstores", 00:08:53.813 "req_id": 1 00:08:53.813 } 00:08:53.813 Got JSON-RPC error response 00:08:53.813 response: 00:08:53.813 { 00:08:53.813 "code": -19, 00:08:53.813 "message": "No such device" 00:08:53.813 } 00:08:53.813 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:53.813 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:53.813 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:53.813 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:53.813 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:53.813 aio_bdev 00:08:54.072 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev eec1a7cd-c231-4a5f-af90-a658dc7c97de 00:08:54.073 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=eec1a7cd-c231-4a5f-af90-a658dc7c97de 00:08:54.073 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.073 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:54.073 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.073 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.073 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.073 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b eec1a7cd-c231-4a5f-af90-a658dc7c97de -t 2000 00:08:54.330 [ 00:08:54.330 { 00:08:54.330 "name": "eec1a7cd-c231-4a5f-af90-a658dc7c97de", 00:08:54.330 "aliases": [ 00:08:54.330 "lvs/lvol" 00:08:54.330 ], 00:08:54.330 "product_name": "Logical Volume", 00:08:54.330 "block_size": 4096, 00:08:54.330 "num_blocks": 38912, 00:08:54.330 "uuid": "eec1a7cd-c231-4a5f-af90-a658dc7c97de", 00:08:54.330 "assigned_rate_limits": { 00:08:54.330 "rw_ios_per_sec": 0, 00:08:54.330 "rw_mbytes_per_sec": 0, 00:08:54.330 "r_mbytes_per_sec": 0, 00:08:54.330 "w_mbytes_per_sec": 0 00:08:54.330 }, 00:08:54.330 "claimed": false, 00:08:54.330 "zoned": false, 00:08:54.330 "supported_io_types": { 00:08:54.330 "read": true, 00:08:54.330 "write": true, 00:08:54.330 "unmap": true, 00:08:54.330 "flush": false, 00:08:54.330 "reset": true, 00:08:54.330 "nvme_admin": false, 00:08:54.330 "nvme_io": false, 00:08:54.330 "nvme_io_md": false, 00:08:54.330 "write_zeroes": true, 00:08:54.330 "zcopy": false, 00:08:54.330 "get_zone_info": false, 00:08:54.330 "zone_management": false, 00:08:54.330 "zone_append": false, 00:08:54.330 "compare": false, 00:08:54.330 "compare_and_write": false, 00:08:54.330 "abort": false, 00:08:54.330 "seek_hole": true, 00:08:54.330 "seek_data": true, 00:08:54.330 "copy": false, 00:08:54.330 "nvme_iov_md": false 00:08:54.330 }, 00:08:54.330 "driver_specific": { 00:08:54.330 "lvol": { 00:08:54.330 "lvol_store_uuid": "4fe74609-28b5-473e-a8c8-ab81bf744455", 00:08:54.330 "base_bdev": "aio_bdev", 00:08:54.330 "thin_provision": false, 00:08:54.330 "num_allocated_clusters": 38, 00:08:54.330 "snapshot": false, 00:08:54.330 "clone": false, 00:08:54.330 "esnap_clone": false 00:08:54.330 } 00:08:54.330 } 00:08:54.330 } 00:08:54.330 ] 00:08:54.330 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:54.330 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:54.330 
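Deleting aio_bdev above hot-removed the lvstore (hence the -19 "No such device" JSON-RPC error, which the NOT wrapper expects), so the test re-creates the AIO bdev and waits for the lvol to be examined and re-registered before querying it. Roughly, with $rpc as in the earlier sketch:

  $rpc bdev_aio_create ./test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_wait_for_examine                                            # flush async lvol examine
  $rpc bdev_get_bdevs -b eec1a7cd-c231-4a5f-af90-a658dc7c97de -t 2000   # wait up to 2000 ms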
16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:54.330 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:54.330 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:54.330 16:35:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:54.589 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:54.589 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete eec1a7cd-c231-4a5f-af90-a658dc7c97de 00:08:54.847 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4fe74609-28b5-473e-a8c8-ab81bf744455 00:08:54.847 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.106 00:08:55.106 real 0m14.508s 00:08:55.106 user 0m14.064s 00:08:55.106 sys 0m1.172s 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:55.106 ************************************ 00:08:55.106 END TEST lvs_grow_clean 00:08:55.106 ************************************ 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.106 ************************************ 00:08:55.106 START TEST lvs_grow_dirty 00:08:55.106 ************************************ 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.106 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.366 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:55.366 16:35:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:55.366 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b252813a-626b-4268-885e-183af627e40d 00:08:55.366 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:08:55.366 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:55.624 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:55.624 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:55.624 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b252813a-626b-4268-885e-183af627e40d lvol 150 00:08:55.624 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e57eed5-fb4e-4652-8c3b-554274adcfcc 00:08:55.624 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:55.624 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:55.882 [2024-12-06 16:35:44.451546] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:55.882 [2024-12-06 16:35:44.451587] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:55.882 true 00:08:55.882 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:08:55.882 16:35:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:56.141 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:56.141 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:56.141 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e57eed5-fb4e-4652-8c3b-554274adcfcc 00:08:56.399 16:35:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:56.399 [2024-12-06 16:35:45.057351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.399 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2021990 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2021990 /var/tmp/bdevperf.sock 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2021990 ']' 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.657 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:56.657 [2024-12-06 16:35:45.262351] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
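The run so far compresses the lvs_grow_dirty setup into a handful of RPCs: a 200M file is exposed as an aio bdev, an lvol store with 4 MiB clusters is created on it (49 usable data clusters once metadata is set aside), a 150M lvol is carved out of it, and the backing file is truncated to 400M and rescanned; the store keeps reporting 49 clusters until bdev_lvol_grow_lvstore runs later in the test, under bdevperf I/O. A rough standalone sketch of that flow, using the same rpc.py calls seen in the trace (the file path is a placeholder and the UUIDs are whatever the RPCs print):

    rpc=scripts/rpc.py                                   # run from an SPDK checkout with the target up
    truncate -s 200M /tmp/aio_file                       # illustrative path; the test uses its own workspace
    $rpc bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # 150M at 4 MiB/cluster = 38 clusters
    truncate -s 400M /tmp/aio_file                       # grow the file underneath the bdev
    $rpc bdev_aio_rescan aio_bdev                        # block count goes 51200 -> 102400
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                # after this the store reports 99
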
00:08:56.657 [2024-12-06 16:35:45.262400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021990 ] 00:08:56.657 [2024-12-06 16:35:45.326320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.657 [2024-12-06 16:35:45.342603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.914 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.914 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:56.914 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:57.172 Nvme0n1 00:08:57.172 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:57.429 [ 00:08:57.429 { 00:08:57.429 "name": "Nvme0n1", 00:08:57.429 "aliases": [ 00:08:57.429 "7e57eed5-fb4e-4652-8c3b-554274adcfcc" 00:08:57.429 ], 00:08:57.429 "product_name": "NVMe disk", 00:08:57.429 "block_size": 4096, 00:08:57.429 "num_blocks": 38912, 00:08:57.429 "uuid": "7e57eed5-fb4e-4652-8c3b-554274adcfcc", 00:08:57.429 "numa_id": 0, 00:08:57.429 "assigned_rate_limits": { 00:08:57.429 "rw_ios_per_sec": 0, 00:08:57.429 "rw_mbytes_per_sec": 0, 00:08:57.429 "r_mbytes_per_sec": 0, 00:08:57.429 "w_mbytes_per_sec": 0 00:08:57.429 }, 00:08:57.429 "claimed": false, 00:08:57.429 "zoned": false, 00:08:57.429 "supported_io_types": { 00:08:57.429 "read": true, 00:08:57.429 "write": true, 00:08:57.429 "unmap": true, 00:08:57.429 "flush": true, 00:08:57.429 "reset": true, 00:08:57.429 "nvme_admin": true, 00:08:57.429 "nvme_io": true, 00:08:57.429 "nvme_io_md": false, 00:08:57.429 "write_zeroes": true, 00:08:57.429 "zcopy": false, 00:08:57.429 "get_zone_info": false, 00:08:57.429 "zone_management": false, 00:08:57.429 "zone_append": false, 00:08:57.429 "compare": true, 00:08:57.429 "compare_and_write": true, 00:08:57.429 "abort": true, 00:08:57.429 "seek_hole": false, 00:08:57.429 "seek_data": false, 00:08:57.429 "copy": true, 00:08:57.429 "nvme_iov_md": false 00:08:57.429 }, 00:08:57.429 "memory_domains": [ 00:08:57.429 { 00:08:57.429 "dma_device_id": "system", 00:08:57.429 "dma_device_type": 1 00:08:57.429 } 00:08:57.429 ], 00:08:57.429 "driver_specific": { 00:08:57.429 "nvme": [ 00:08:57.429 { 00:08:57.429 "trid": { 00:08:57.429 "trtype": "TCP", 00:08:57.429 "adrfam": "IPv4", 00:08:57.429 "traddr": "10.0.0.2", 00:08:57.429 "trsvcid": "4420", 00:08:57.429 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:57.429 }, 00:08:57.429 "ctrlr_data": { 00:08:57.429 "cntlid": 1, 00:08:57.429 "vendor_id": "0x8086", 00:08:57.429 "model_number": "SPDK bdev Controller", 00:08:57.429 "serial_number": "SPDK0", 00:08:57.429 "firmware_revision": "25.01", 00:08:57.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:57.429 "oacs": { 00:08:57.429 "security": 0, 00:08:57.429 "format": 0, 00:08:57.429 "firmware": 0, 00:08:57.429 "ns_manage": 0 00:08:57.429 }, 00:08:57.429 "multi_ctrlr": true, 00:08:57.429 
"ana_reporting": false 00:08:57.429 }, 00:08:57.429 "vs": { 00:08:57.429 "nvme_version": "1.3" 00:08:57.429 }, 00:08:57.429 "ns_data": { 00:08:57.429 "id": 1, 00:08:57.429 "can_share": true 00:08:57.429 } 00:08:57.429 } 00:08:57.429 ], 00:08:57.429 "mp_policy": "active_passive" 00:08:57.429 } 00:08:57.429 } 00:08:57.429 ] 00:08:57.429 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2022013 00:08:57.429 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:57.429 16:35:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:57.429 Running I/O for 10 seconds... 00:08:58.367 Latency(us) 00:08:58.367 [2024-12-06T15:35:47.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.367 Nvme0n1 : 1.00 25158.00 98.27 0.00 0.00 0.00 0.00 0.00 00:08:58.367 [2024-12-06T15:35:47.060Z] =================================================================================================================== 00:08:58.367 [2024-12-06T15:35:47.060Z] Total : 25158.00 98.27 0.00 0.00 0.00 0.00 0.00 00:08:58.367 00:08:59.305 16:35:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b252813a-626b-4268-885e-183af627e40d 00:08:59.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.305 Nvme0n1 : 2.00 25378.50 99.13 0.00 0.00 0.00 0.00 0.00 00:08:59.305 [2024-12-06T15:35:47.998Z] =================================================================================================================== 00:08:59.305 [2024-12-06T15:35:47.998Z] Total : 25378.50 99.13 0.00 0.00 0.00 0.00 0.00 00:08:59.305 00:08:59.563 true 00:08:59.563 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:08:59.563 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:59.563 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:59.563 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:59.563 16:35:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2022013 00:09:00.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.503 Nvme0n1 : 3.00 25452.33 99.42 0.00 0.00 0.00 0.00 0.00 00:09:00.503 [2024-12-06T15:35:49.196Z] =================================================================================================================== 00:09:00.503 [2024-12-06T15:35:49.196Z] Total : 25452.33 99.42 0.00 0.00 0.00 0.00 0.00 00:09:00.503 00:09:01.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.519 Nvme0n1 : 4.00 25521.50 99.69 0.00 0.00 0.00 0.00 0.00 00:09:01.519 [2024-12-06T15:35:50.212Z] 
=================================================================================================================== 00:09:01.519 [2024-12-06T15:35:50.212Z] Total : 25521.50 99.69 0.00 0.00 0.00 0.00 0.00 00:09:01.519 00:09:02.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.458 Nvme0n1 : 5.00 25556.40 99.83 0.00 0.00 0.00 0.00 0.00 00:09:02.458 [2024-12-06T15:35:51.151Z] =================================================================================================================== 00:09:02.458 [2024-12-06T15:35:51.151Z] Total : 25556.40 99.83 0.00 0.00 0.00 0.00 0.00 00:09:02.458 00:09:03.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.396 Nvme0n1 : 6.00 25589.67 99.96 0.00 0.00 0.00 0.00 0.00 00:09:03.396 [2024-12-06T15:35:52.089Z] =================================================================================================================== 00:09:03.396 [2024-12-06T15:35:52.089Z] Total : 25589.67 99.96 0.00 0.00 0.00 0.00 0.00 00:09:03.396 00:09:04.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.345 Nvme0n1 : 7.00 25618.00 100.07 0.00 0.00 0.00 0.00 0.00 00:09:04.345 [2024-12-06T15:35:53.038Z] =================================================================================================================== 00:09:04.345 [2024-12-06T15:35:53.038Z] Total : 25618.00 100.07 0.00 0.00 0.00 0.00 0.00 00:09:04.345 00:09:05.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.724 Nvme0n1 : 8.00 25639.88 100.16 0.00 0.00 0.00 0.00 0.00 00:09:05.724 [2024-12-06T15:35:54.417Z] =================================================================================================================== 00:09:05.724 [2024-12-06T15:35:54.417Z] Total : 25639.88 100.16 0.00 0.00 0.00 0.00 0.00 00:09:05.724 00:09:06.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.660 Nvme0n1 : 9.00 25656.11 100.22 0.00 0.00 0.00 0.00 0.00 00:09:06.660 [2024-12-06T15:35:55.353Z] =================================================================================================================== 00:09:06.660 [2024-12-06T15:35:55.353Z] Total : 25656.11 100.22 0.00 0.00 0.00 0.00 0.00 00:09:06.660 00:09:07.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.596 Nvme0n1 : 10.00 25672.90 100.28 0.00 0.00 0.00 0.00 0.00 00:09:07.596 [2024-12-06T15:35:56.289Z] =================================================================================================================== 00:09:07.596 [2024-12-06T15:35:56.289Z] Total : 25672.90 100.28 0.00 0.00 0.00 0.00 0.00 00:09:07.596 00:09:07.596 00:09:07.596 Latency(us) 00:09:07.596 [2024-12-06T15:35:56.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.596 Nvme0n1 : 10.00 25674.03 100.29 0.00 0.00 4982.42 3003.73 15837.87 00:09:07.596 [2024-12-06T15:35:56.290Z] =================================================================================================================== 00:09:07.597 [2024-12-06T15:35:56.290Z] Total : 25674.03 100.29 0.00 0.00 4982.42 3003.73 15837.87 00:09:07.597 { 00:09:07.597 "results": [ 00:09:07.597 { 00:09:07.597 "job": "Nvme0n1", 00:09:07.597 "core_mask": "0x2", 00:09:07.597 "workload": "randwrite", 00:09:07.597 "status": "finished", 00:09:07.597 "queue_depth": 128, 00:09:07.597 "io_size": 4096, 
00:09:07.597 "runtime": 10.00322, 00:09:07.597 "iops": 25674.032961386434, 00:09:07.597 "mibps": 100.28919125541576, 00:09:07.597 "io_failed": 0, 00:09:07.597 "io_timeout": 0, 00:09:07.597 "avg_latency_us": 4982.415698697806, 00:09:07.597 "min_latency_us": 3003.733333333333, 00:09:07.597 "max_latency_us": 15837.866666666667 00:09:07.597 } 00:09:07.597 ], 00:09:07.597 "core_count": 1 00:09:07.597 } 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2021990 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2021990 ']' 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2021990 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2021990 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2021990' 00:09:07.597 killing process with pid 2021990 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2021990 00:09:07.597 Received shutdown signal, test time was about 10.000000 seconds 00:09:07.597 00:09:07.597 Latency(us) 00:09:07.597 [2024-12-06T15:35:56.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.597 [2024-12-06T15:35:56.290Z] =================================================================================================================== 00:09:07.597 [2024-12-06T15:35:56.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2021990 00:09:07.597 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.855 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.855 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:07.855 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:08.115 16:35:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2018204 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2018204 00:09:08.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2018204 Killed "${NVMF_APP[@]}" "$@" 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2024569 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2024569 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2024569 ']' 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:08.115 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:08.115 [2024-12-06 16:35:56.748938] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:08.115 [2024-12-06 16:35:56.748994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.374 [2024-12-06 16:35:56.818800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.374 [2024-12-06 16:35:56.834411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.374 [2024-12-06 16:35:56.834442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.374 [2024-12-06 16:35:56.834447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.374 [2024-12-06 16:35:56.834452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
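The kill -9 just above is the point of the "dirty" variant: the first nvmf target (pid 2018204) is taken down with SIGKILL, so the lvol store is never cleanly unloaded, and a fresh target is started in its place. When the aio bdev is re-created below, the blobstore recovery notices show the on-disk metadata being replayed, after which the test verifies that the lvol and the grown store survived intact: 99 total data clusters and 61 free, i.e. 99 minus the lvol's 38 allocated clusters. A condensed sketch of that verification, continuing the shorthand from the earlier sketch ($rpc, $lvs and $lvol stand in for the path and UUIDs captured earlier in the run):

    $rpc bdev_aio_create /tmp/aio_file aio_bdev 4096     # triggers "Performing recovery on blobstore"
    $rpc bdev_wait_for_examine                           # let vbdev_lvol re-claim the recovered store
    $rpc bdev_get_bdevs -b "$lvol" -t 2000               # lvol reappears under the alias lvs/lvol
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expected: 61
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expected: 99
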
00:09:08.374 [2024-12-06 16:35:56.834456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.374 [2024-12-06 16:35:56.834931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.374 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.374 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:08.374 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.374 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.374 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:08.374 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.374 16:35:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:08.633 [2024-12-06 16:35:57.078590] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:08.633 [2024-12-06 16:35:57.078668] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:08.633 [2024-12-06 16:35:57.078691] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7e57eed5-fb4e-4652-8c3b-554274adcfcc 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7e57eed5-fb4e-4652-8c3b-554274adcfcc 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:08.633 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e57eed5-fb4e-4652-8c3b-554274adcfcc -t 2000 00:09:08.892 [ 00:09:08.892 { 00:09:08.892 "name": "7e57eed5-fb4e-4652-8c3b-554274adcfcc", 00:09:08.892 "aliases": [ 00:09:08.892 "lvs/lvol" 00:09:08.892 ], 00:09:08.892 "product_name": "Logical Volume", 00:09:08.892 "block_size": 4096, 00:09:08.892 "num_blocks": 38912, 00:09:08.892 "uuid": "7e57eed5-fb4e-4652-8c3b-554274adcfcc", 00:09:08.892 "assigned_rate_limits": { 00:09:08.892 "rw_ios_per_sec": 0, 00:09:08.892 "rw_mbytes_per_sec": 0, 
00:09:08.892 "r_mbytes_per_sec": 0, 00:09:08.892 "w_mbytes_per_sec": 0 00:09:08.892 }, 00:09:08.892 "claimed": false, 00:09:08.892 "zoned": false, 00:09:08.892 "supported_io_types": { 00:09:08.892 "read": true, 00:09:08.892 "write": true, 00:09:08.892 "unmap": true, 00:09:08.892 "flush": false, 00:09:08.892 "reset": true, 00:09:08.892 "nvme_admin": false, 00:09:08.893 "nvme_io": false, 00:09:08.893 "nvme_io_md": false, 00:09:08.893 "write_zeroes": true, 00:09:08.893 "zcopy": false, 00:09:08.893 "get_zone_info": false, 00:09:08.893 "zone_management": false, 00:09:08.893 "zone_append": false, 00:09:08.893 "compare": false, 00:09:08.893 "compare_and_write": false, 00:09:08.893 "abort": false, 00:09:08.893 "seek_hole": true, 00:09:08.893 "seek_data": true, 00:09:08.893 "copy": false, 00:09:08.893 "nvme_iov_md": false 00:09:08.893 }, 00:09:08.893 "driver_specific": { 00:09:08.893 "lvol": { 00:09:08.893 "lvol_store_uuid": "b252813a-626b-4268-885e-183af627e40d", 00:09:08.893 "base_bdev": "aio_bdev", 00:09:08.893 "thin_provision": false, 00:09:08.893 "num_allocated_clusters": 38, 00:09:08.893 "snapshot": false, 00:09:08.893 "clone": false, 00:09:08.893 "esnap_clone": false 00:09:08.893 } 00:09:08.893 } 00:09:08.893 } 00:09:08.893 ] 00:09:08.893 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:08.893 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:08.893 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:08.893 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:08.893 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:08.893 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:09.151 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:09.151 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:09.410 [2024-12-06 16:35:57.850980] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:09.410 16:35:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:09.410 request: 00:09:09.410 { 00:09:09.410 "uuid": "b252813a-626b-4268-885e-183af627e40d", 00:09:09.410 "method": "bdev_lvol_get_lvstores", 00:09:09.410 "req_id": 1 00:09:09.410 } 00:09:09.410 Got JSON-RPC error response 00:09:09.410 response: 00:09:09.410 { 00:09:09.410 "code": -19, 00:09:09.410 "message": "No such device" 00:09:09.410 } 00:09:09.410 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:09.410 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:09.410 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:09.410 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:09.410 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.668 aio_bdev 00:09:09.668 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7e57eed5-fb4e-4652-8c3b-554274adcfcc 00:09:09.668 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7e57eed5-fb4e-4652-8c3b-554274adcfcc 00:09:09.668 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.668 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:09.668 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.668 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.668 16:35:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:09.927 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7e57eed5-fb4e-4652-8c3b-554274adcfcc -t 2000 00:09:09.927 [ 00:09:09.927 { 00:09:09.927 "name": "7e57eed5-fb4e-4652-8c3b-554274adcfcc", 00:09:09.927 "aliases": [ 00:09:09.927 "lvs/lvol" 00:09:09.927 ], 00:09:09.927 "product_name": "Logical Volume", 00:09:09.927 "block_size": 4096, 00:09:09.927 "num_blocks": 38912, 00:09:09.927 "uuid": "7e57eed5-fb4e-4652-8c3b-554274adcfcc", 00:09:09.927 "assigned_rate_limits": { 00:09:09.927 "rw_ios_per_sec": 0, 00:09:09.927 "rw_mbytes_per_sec": 0, 00:09:09.927 "r_mbytes_per_sec": 0, 00:09:09.927 "w_mbytes_per_sec": 0 00:09:09.927 }, 00:09:09.927 "claimed": false, 00:09:09.927 "zoned": false, 00:09:09.927 "supported_io_types": { 00:09:09.927 "read": true, 00:09:09.927 "write": true, 00:09:09.927 "unmap": true, 00:09:09.927 "flush": false, 00:09:09.927 "reset": true, 00:09:09.927 "nvme_admin": false, 00:09:09.927 "nvme_io": false, 00:09:09.927 "nvme_io_md": false, 00:09:09.927 "write_zeroes": true, 00:09:09.927 "zcopy": false, 00:09:09.927 "get_zone_info": false, 00:09:09.927 "zone_management": false, 00:09:09.927 "zone_append": false, 00:09:09.927 "compare": false, 00:09:09.927 "compare_and_write": false, 00:09:09.927 "abort": false, 00:09:09.927 "seek_hole": true, 00:09:09.927 "seek_data": true, 00:09:09.927 "copy": false, 00:09:09.928 "nvme_iov_md": false 00:09:09.928 }, 00:09:09.928 "driver_specific": { 00:09:09.928 "lvol": { 00:09:09.928 "lvol_store_uuid": "b252813a-626b-4268-885e-183af627e40d", 00:09:09.928 "base_bdev": "aio_bdev", 00:09:09.928 "thin_provision": false, 00:09:09.928 "num_allocated_clusters": 38, 00:09:09.928 "snapshot": false, 00:09:09.928 "clone": false, 00:09:09.928 "esnap_clone": false 00:09:09.928 } 00:09:09.928 } 00:09:09.928 } 00:09:09.928 ] 00:09:09.928 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:09.928 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:09.928 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:10.185 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:10.185 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b252813a-626b-4268-885e-183af627e40d 00:09:10.185 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:10.185 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:10.185 16:35:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7e57eed5-fb4e-4652-8c3b-554274adcfcc 00:09:10.442 16:35:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b252813a-626b-4268-885e-183af627e40d 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:10.701 00:09:10.701 real 0m15.646s 00:09:10.701 user 0m41.335s 00:09:10.701 sys 0m2.699s 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:10.701 ************************************ 00:09:10.701 END TEST lvs_grow_dirty 00:09:10.701 ************************************ 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:10.701 nvmf_trace.0 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.701 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.701 rmmod nvme_tcp 00:09:10.960 rmmod nvme_fabrics 00:09:10.960 rmmod nvme_keyring 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:10.960 
16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2024569 ']' 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2024569 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2024569 ']' 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2024569 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2024569 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2024569' 00:09:10.960 killing process with pid 2024569 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2024569 00:09:10.960 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2024569 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.961 16:35:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:13.498 00:09:13.498 real 0m38.491s 00:09:13.498 user 0m59.906s 00:09:13.498 sys 0m8.235s 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.498 ************************************ 00:09:13.498 END TEST nvmf_lvs_grow 00:09:13.498 ************************************ 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.498 ************************************ 00:09:13.498 START TEST nvmf_bdev_io_wait 00:09:13.498 ************************************ 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:13.498 * Looking for test storage... 00:09:13.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:13.498 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.499 --rc genhtml_branch_coverage=1 00:09:13.499 --rc genhtml_function_coverage=1 00:09:13.499 --rc genhtml_legend=1 00:09:13.499 --rc geninfo_all_blocks=1 00:09:13.499 --rc geninfo_unexecuted_blocks=1 00:09:13.499 00:09:13.499 ' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.499 --rc genhtml_branch_coverage=1 00:09:13.499 --rc genhtml_function_coverage=1 00:09:13.499 --rc genhtml_legend=1 00:09:13.499 --rc geninfo_all_blocks=1 00:09:13.499 --rc geninfo_unexecuted_blocks=1 00:09:13.499 00:09:13.499 ' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.499 --rc genhtml_branch_coverage=1 00:09:13.499 --rc genhtml_function_coverage=1 00:09:13.499 --rc genhtml_legend=1 00:09:13.499 --rc geninfo_all_blocks=1 00:09:13.499 --rc geninfo_unexecuted_blocks=1 00:09:13.499 00:09:13.499 ' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:13.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.499 --rc genhtml_branch_coverage=1 00:09:13.499 --rc genhtml_function_coverage=1 00:09:13.499 --rc genhtml_legend=1 00:09:13.499 --rc geninfo_all_blocks=1 00:09:13.499 --rc geninfo_unexecuted_blocks=1 00:09:13.499 00:09:13.499 ' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.499 16:36:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.499 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.500 16:36:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:18.772 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:18.772 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.772 16:36:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:18.772 Found net devices under 0000:31:00.0: cvl_0_0 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:18.772 Found net devices under 0000:31:00.1: cvl_0_1 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.772 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:09:18.773 00:09:18.773 --- 10.0.0.2 ping statistics --- 00:09:18.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.773 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:09:18.773 00:09:18.773 --- 10.0.0.1 ping statistics --- 00:09:18.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.773 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2029679 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2029679 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2029679 ']' 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.773 16:36:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:18.773 [2024-12-06 16:36:07.339672] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
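The trace above has now built the loopback NVMe/TCP topology for this run: the target port (cvl_0_0, 10.0.0.2/24) lives inside the cvl_0_0_ns_spdk namespace, the initiator port (cvl_0_1, 10.0.0.1/24) stays in the root namespace, an iptables ACCEPT rule opens TCP port 4420, and one ping in each direction verifies the path before nvmf_tgt starts. A minimal sketch of the same setup, assuming a veth pair in place of the host's two physical ice ports:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
# veth stand-ins for the CI host's cvl_0_0 (target) / cvl_0_1 (initiator) ports
ip link add cvl_0_0 type veth peer name cvl_0_1
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface,
# tagged so teardown can find it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF:test-rule
ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator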
00:09:18.773 [2024-12-06 16:36:07.339744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.773 [2024-12-06 16:36:07.433109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.773 [2024-12-06 16:36:07.463357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.773 [2024-12-06 16:36:07.463410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.773 [2024-12-06 16:36:07.463419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.773 [2024-12-06 16:36:07.463426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.773 [2024-12-06 16:36:07.463433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.033 [2024-12-06 16:36:07.465333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.033 [2024-12-06 16:36:07.465490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.033 [2024-12-06 16:36:07.465786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.033 [2024-12-06 16:36:07.465789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:19.601 [2024-12-06 16:36:08.241861] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.601 Malloc0 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.601 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.601 [2024-12-06 16:36:08.290544] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.861 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.861 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2030041 00:09:19.861 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2030045 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2030046 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:19.862 16:36:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:19.862 { 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme$subsystem", 00:09:19.862 "trtype": "$TEST_TRANSPORT", 00:09:19.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "$NVMF_PORT", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.862 "hdgst": ${hdgst:-false}, 00:09:19.862 "ddgst": ${ddgst:-false} 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 } 00:09:19.862 EOF 00:09:19.862 )") 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2030049 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:19.862 { 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme$subsystem", 00:09:19.862 "trtype": "$TEST_TRANSPORT", 00:09:19.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "$NVMF_PORT", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.862 "hdgst": ${hdgst:-false}, 00:09:19.862 "ddgst": ${ddgst:-false} 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 } 00:09:19.862 EOF 00:09:19.862 )") 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:19.862 { 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme$subsystem", 00:09:19.862 "trtype": "$TEST_TRANSPORT", 00:09:19.862 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "$NVMF_PORT", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.862 "hdgst": ${hdgst:-false}, 00:09:19.862 "ddgst": ${ddgst:-false} 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 } 00:09:19.862 EOF 00:09:19.862 )") 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2030041 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:19.862 { 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme$subsystem", 00:09:19.862 "trtype": "$TEST_TRANSPORT", 00:09:19.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "$NVMF_PORT", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.862 "hdgst": ${hdgst:-false}, 00:09:19.862 "ddgst": ${ddgst:-false} 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 } 00:09:19.862 EOF 00:09:19.862 )") 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme1", 00:09:19.862 "trtype": "tcp", 00:09:19.862 "traddr": "10.0.0.2", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "4420", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:19.862 "hdgst": false, 00:09:19.862 "ddgst": false 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 }' 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme1", 00:09:19.862 "trtype": "tcp", 00:09:19.862 "traddr": "10.0.0.2", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "4420", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:19.862 "hdgst": false, 00:09:19.862 "ddgst": false 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 }' 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme1", 00:09:19.862 "trtype": "tcp", 00:09:19.862 "traddr": "10.0.0.2", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "4420", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:19.862 "hdgst": false, 00:09:19.862 "ddgst": false 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 }' 00:09:19.862 16:36:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:19.862 "params": { 00:09:19.862 "name": "Nvme1", 00:09:19.862 "trtype": "tcp", 00:09:19.862 "traddr": "10.0.0.2", 00:09:19.862 "adrfam": "ipv4", 00:09:19.862 "trsvcid": "4420", 00:09:19.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:19.862 "hdgst": false, 00:09:19.862 "ddgst": false 00:09:19.862 }, 00:09:19.862 "method": "bdev_nvme_attach_controller" 00:09:19.862 }' 00:09:19.862 [2024-12-06 16:36:08.332173] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:19.862 [2024-12-06 16:36:08.332243] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:19.862 [2024-12-06 16:36:08.334273] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:19.862 [2024-12-06 16:36:08.334336] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:19.862 [2024-12-06 16:36:08.334706] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
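The four bdevperf invocations above differ only in workload and placement: write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80, each with its own shared-memory instance id (-i 1..4), queue depth 128, 4096-byte I/O, a 1-second run, and 256 MB of hugepage memory (-s 256). A sketch of that fan-out, assuming the gen_json helper from the previous sketch and a local bdevperf build at $BDEVPERF:

BDEVPERF=./build/examples/bdevperf   # assumed path to the bdevperf example app
declare -A mask=( [write]=0x10 [read]=0x20 [flush]=0x40 [unmap]=0x80 )
declare -A pid=()
i=1
for w in write read flush unmap; do
  "$BDEVPERF" -m "${mask[$w]}" -i $((i++)) --json <(gen_json) \
    -q 128 -o 4096 -w "$w" -t 1 -s 256 &
  pid[$w]=$!           # remembered like WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID above
done
for w in write read flush unmap; do
  wait "${pid[$w]}"    # reap in launch order, as the script's wait calls do
done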
00:09:19.863 [2024-12-06 16:36:08.334773] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:19.863 [2024-12-06 16:36:08.335419] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:19.863 [2024-12-06 16:36:08.335481] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:19.863 [2024-12-06 16:36:08.537981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.863 [2024-12-06 16:36:08.550615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:20.123 [2024-12-06 16:36:08.601484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.123 [2024-12-06 16:36:08.613905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:20.123 [2024-12-06 16:36:08.639524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.123 [2024-12-06 16:36:08.650957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:20.123 [2024-12-06 16:36:08.691917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.123 [2024-12-06 16:36:08.703715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:20.382 Running I/O for 1 seconds... 00:09:20.382 Running I/O for 1 seconds... 00:09:20.382 Running I/O for 1 seconds... 00:09:20.382 Running I/O for 1 seconds... 00:09:21.319 181760.00 IOPS, 710.00 MiB/s [2024-12-06T15:36:10.012Z] 8407.00 IOPS, 32.84 MiB/s 00:09:21.319 Latency(us) 00:09:21.319 [2024-12-06T15:36:10.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.319 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:21.319 Nvme1n1 : 1.00 181398.85 708.59 0.00 0.00 701.38 298.67 1966.08 00:09:21.319 [2024-12-06T15:36:10.012Z] =================================================================================================================== 00:09:21.319 [2024-12-06T15:36:10.012Z] Total : 181398.85 708.59 0.00 0.00 701.38 298.67 1966.08 00:09:21.319 00:09:21.319 Latency(us) 00:09:21.319 [2024-12-06T15:36:10.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.319 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:21.319 Nvme1n1 : 1.02 8413.08 32.86 0.00 0.00 15126.22 6553.60 26651.31 00:09:21.319 [2024-12-06T15:36:10.012Z] =================================================================================================================== 00:09:21.319 [2024-12-06T15:36:10.012Z] Total : 8413.08 32.86 0.00 0.00 15126.22 6553.60 26651.31 00:09:21.319 18909.00 IOPS, 73.86 MiB/s 00:09:21.319 Latency(us) 00:09:21.319 [2024-12-06T15:36:10.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.319 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:21.319 Nvme1n1 : 1.01 18950.32 74.02 0.00 0.00 6736.89 3208.53 15728.64 00:09:21.319 [2024-12-06T15:36:10.012Z] =================================================================================================================== 00:09:21.319 [2024-12-06T15:36:10.012Z] Total : 18950.32 74.02 0.00 0.00 6736.89 3208.53 15728.64 00:09:21.319 8093.00 IOPS, 31.61 MiB/s 
00:09:21.319 Latency(us) 00:09:21.319 [2024-12-06T15:36:10.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.319 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:21.319 Nvme1n1 : 1.01 8209.98 32.07 0.00 0.00 15546.01 4014.08 35170.99 00:09:21.319 [2024-12-06T15:36:10.012Z] =================================================================================================================== 00:09:21.319 [2024-12-06T15:36:10.012Z] Total : 8209.98 32.07 0.00 0.00 15546.01 4014.08 35170.99 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2030045 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2030046 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2030049 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.579 rmmod nvme_tcp 00:09:21.579 rmmod nvme_fabrics 00:09:21.579 rmmod nvme_keyring 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2029679 ']' 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2029679 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2029679 ']' 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2029679 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2029679 00:09:21.579 
16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2029679' 00:09:21.579 killing process with pid 2029679 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2029679 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2029679 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.579 16:36:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:24.121 00:09:24.121 real 0m10.615s 00:09:24.121 user 0m17.637s 00:09:24.121 sys 0m5.561s 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 ************************************ 00:09:24.121 END TEST nvmf_bdev_io_wait 00:09:24.121 ************************************ 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.121 ************************************ 00:09:24.121 START TEST nvmf_queue_depth 00:09:24.121 ************************************ 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:24.121 * Looking for test storage... 
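Teardown, traced above, mirrors the setup: reap the bdevperf pids, delete the subsystem over RPC, unload the kernel NVMe modules, kill the target by pid, and restore the firewall minus the test's rules. The iptables step works because every rule the test added carries an SPDK_NVMF comment, so filtering the saved ruleset removes exactly those rules and nothing else. A sketch of the final cleanup, assuming the interface and namespace names from this run:

modprobe -v -r nvme-tcp                               # drops nvme_tcp, nvme_fabrics, nvme_keyring as logged above
iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true  # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1                              # clear the initiator-side address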
00:09:24.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.121 --rc genhtml_branch_coverage=1 00:09:24.121 --rc genhtml_function_coverage=1 00:09:24.121 --rc genhtml_legend=1 00:09:24.121 --rc geninfo_all_blocks=1 00:09:24.121 --rc geninfo_unexecuted_blocks=1 00:09:24.121 00:09:24.121 ' 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.121 --rc genhtml_branch_coverage=1 00:09:24.121 --rc genhtml_function_coverage=1 00:09:24.121 --rc genhtml_legend=1 00:09:24.121 --rc geninfo_all_blocks=1 00:09:24.121 --rc geninfo_unexecuted_blocks=1 00:09:24.121 00:09:24.121 ' 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.121 --rc genhtml_branch_coverage=1 00:09:24.121 --rc genhtml_function_coverage=1 00:09:24.121 --rc genhtml_legend=1 00:09:24.121 --rc geninfo_all_blocks=1 00:09:24.121 --rc geninfo_unexecuted_blocks=1 00:09:24.121 00:09:24.121 ' 00:09:24.121 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.121 --rc genhtml_branch_coverage=1 00:09:24.121 --rc genhtml_function_coverage=1 00:09:24.121 --rc genhtml_legend=1 00:09:24.121 --rc geninfo_all_blocks=1 00:09:24.121 --rc geninfo_unexecuted_blocks=1 00:09:24.122 00:09:24.122 ' 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.122 16:36:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:29.420 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:29.420 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.420 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:29.421 Found net devices under 0000:31:00.0: cvl_0_0 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:29.421 Found net devices under 0000:31:00.1: cvl_0_1 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:29.421 16:36:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:29.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:09:29.421 00:09:29.421 --- 10.0.0.2 ping statistics --- 00:09:29.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.421 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:09:29.421 00:09:29.421 --- 10.0.0.1 ping statistics --- 00:09:29.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.421 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2035195 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2035195 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2035195 ']' 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.421 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:29.421 [2024-12-06 16:36:18.108544] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
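
At this point nvmftestinit has built the point-to-point NVMe/TCP link the rest of the test runs over: the first E810 port (cvl_0_0) is moved into a fresh network namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1/24. Putting the target port in its own namespace forces the traffic onto the wire instead of letting the kernel loop it back internally. An iptables rule opens the NVMe/TCP listener port, one ping in each direction proves the link, and nvmf_tgt is then started inside the namespace. A minimal stand-alone sketch of the same setup (not the harness code itself; the interface names are the ones from this trace):

  # assumes two back-to-back ports named cvl_0_0 / cvl_0_1, as in the log above
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                   # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  # tag the rule so cleanup can find it later (see nvmftestfini further down)
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                # root namespace -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1            # namespace -> root namespace
  # the target itself then runs inside the namespace:
  # ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
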
00:09:29.421 [2024-12-06 16:36:18.108613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.681 [2024-12-06 16:36:18.201797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.681 [2024-12-06 16:36:18.228212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.681 [2024-12-06 16:36:18.228263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.681 [2024-12-06 16:36:18.228272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.681 [2024-12-06 16:36:18.228279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.681 [2024-12-06 16:36:18.228285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.681 [2024-12-06 16:36:18.229029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.250 [2024-12-06 16:36:18.923945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.250 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.510 Malloc0 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.510 16:36:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.510 [2024-12-06 16:36:18.963679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2035400 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2035400 /var/tmp/bdevperf.sock 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2035400 ']' 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.510 16:36:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:30.510 [2024-12-06 16:36:19.003040] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
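
With the target app up, queue_depth.sh provisions everything over the RPC socket and then drives I/O from a separate bdevperf process: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420. bdevperf is started with -z so it idles on its own RPC socket until a controller is attached; the -q 1024 is the point of the test: 1024 outstanding 4 KiB verify I/Os against a single queue for 10 seconds. The same sequence, condensed to plain scripts/rpc.py calls (rpc_cmd in the trace is the harness wrapper around it; the absolute workspace paths are shortened to paths inside an SPDK checkout):

  # target side, default RPC socket /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits (-z) until told what to test
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
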
00:09:30.510 [2024-12-06 16:36:19.003123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035400 ]
00:09:30.510 [2024-12-06 16:36:19.085330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.510 [2024-12-06 16:36:19.114379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.771 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:30.771 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:09:30.771 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:09:30.771 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:30.771 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:30.771 NVMe0n1
00:09:30.771 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:30.771 16:36:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:09:31.031 Running I/O for 10 seconds...
00:09:32.902 8316.00 IOPS, 32.48 MiB/s [2024-12-06T15:36:22.532Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-06T15:36:23.912Z] 10242.67 IOPS, 40.01 MiB/s [2024-12-06T15:36:24.847Z] 11074.00 IOPS, 43.26 MiB/s [2024-12-06T15:36:25.780Z] 11636.40 IOPS, 45.45 MiB/s [2024-12-06T15:36:26.717Z] 11937.67 IOPS, 46.63 MiB/s [2024-12-06T15:36:27.653Z] 12191.29 IOPS, 47.62 MiB/s [2024-12-06T15:36:28.590Z] 12400.88 IOPS, 48.44 MiB/s [2024-12-06T15:36:29.524Z] 12533.33 IOPS, 48.96 MiB/s [2024-12-06T15:36:29.829Z] 12664.20 IOPS, 49.47 MiB/s
00:09:41.136 Latency(us)
00:09:41.136 [2024-12-06T15:36:29.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.136 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:41.136 Verification LBA range: start 0x0 length 0x4000
00:09:41.136 NVMe0n1 : 10.06 12681.11 49.54 0.00 0.00 80457.40 23592.96 75147.95
00:09:41.136 [2024-12-06T15:36:29.829Z] ===================================================================================================================
00:09:41.136 [2024-12-06T15:36:29.829Z] Total : 12681.11 49.54 0.00 0.00 80457.40 23592.96 75147.95
00:09:41.136 {
00:09:41.136   "results": [
00:09:41.136     {
00:09:41.136       "job": "NVMe0n1",
00:09:41.136       "core_mask": "0x1",
00:09:41.136       "workload": "verify",
00:09:41.136       "status": "finished",
00:09:41.136       "verify_range": {
00:09:41.136         "start": 0,
00:09:41.136         "length": 16384
00:09:41.136       },
00:09:41.136       "queue_depth": 1024,
00:09:41.136       "io_size": 4096,
00:09:41.136       "runtime": 10.06205,
00:09:41.136       "iops": 12681.113689556303,
00:09:41.136       "mibps": 49.53560034982931,
00:09:41.136       "io_failed": 0,
00:09:41.136       "io_timeout": 0,
00:09:41.136       "avg_latency_us": 80457.40119228618,
00:09:41.136       "min_latency_us": 23592.96,
00:09:41.136       "max_latency_us": 75147.94666666667
00:09:41.136     }
00:09:41.136   ],
00:09:41.136   "core_count": 1
00:09:41.136 }
00:09:41.136 16:36:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2035400 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2035400 ']' 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2035400 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2035400 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2035400' 00:09:41.136 killing process with pid 2035400 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2035400 00:09:41.136 Received shutdown signal, test time was about 10.000000 seconds 00:09:41.136 00:09:41.136 Latency(us) 00:09:41.136 [2024-12-06T15:36:29.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.136 [2024-12-06T15:36:29.829Z] =================================================================================================================== 00:09:41.136 [2024-12-06T15:36:29.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2035400 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.136 rmmod nvme_tcp 00:09:41.136 rmmod nvme_fabrics 00:09:41.136 rmmod nvme_keyring 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2035195 ']' 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2035195 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2035195 ']' 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2035195 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.136 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2035195 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2035195' 00:09:41.416 killing process with pid 2035195 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2035195 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2035195 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.416 16:36:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.321 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:43.321 00:09:43.321 real 0m19.638s 00:09:43.321 user 0m23.451s 00:09:43.321 sys 0m5.470s 00:09:43.321 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.321 16:36:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.322 ************************************ 00:09:43.322 END TEST nvmf_queue_depth 00:09:43.322 ************************************ 00:09:43.322 16:36:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:43.322 16:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:43.322 16:36:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.322 16:36:32 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.581 ************************************ 00:09:43.581 START TEST nvmf_target_multipath 00:09:43.581 ************************************ 00:09:43.581 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:43.581 * Looking for test storage... 00:09:43.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.582 --rc genhtml_branch_coverage=1 00:09:43.582 --rc genhtml_function_coverage=1 00:09:43.582 --rc genhtml_legend=1 00:09:43.582 --rc geninfo_all_blocks=1 00:09:43.582 --rc geninfo_unexecuted_blocks=1 00:09:43.582 00:09:43.582 ' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.582 --rc genhtml_branch_coverage=1 00:09:43.582 --rc genhtml_function_coverage=1 00:09:43.582 --rc genhtml_legend=1 00:09:43.582 --rc geninfo_all_blocks=1 00:09:43.582 --rc geninfo_unexecuted_blocks=1 00:09:43.582 00:09:43.582 ' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.582 --rc genhtml_branch_coverage=1 00:09:43.582 --rc genhtml_function_coverage=1 00:09:43.582 --rc genhtml_legend=1 00:09:43.582 --rc geninfo_all_blocks=1 00:09:43.582 --rc geninfo_unexecuted_blocks=1 00:09:43.582 00:09:43.582 ' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:43.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.582 --rc genhtml_branch_coverage=1 00:09:43.582 --rc genhtml_function_coverage=1 00:09:43.582 --rc genhtml_legend=1 00:09:43.582 --rc geninfo_all_blocks=1 00:09:43.582 --rc geninfo_unexecuted_blocks=1 00:09:43.582 00:09:43.582 ' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.582 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.583 16:36:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:50.154 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:50.154 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:50.154 Found net devices under 0000:31:00.0: cvl_0_0 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:50.154 16:36:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:50.154 Found net devices under 0000:31:00.1: cvl_0_1 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:50.154 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:09:50.155 00:09:50.155 --- 10.0.0.2 ping statistics --- 00:09:50.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.155 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:50.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:09:50.155 00:09:50.155 --- 10.0.0.1 ping statistics --- 00:09:50.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.155 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:50.155 only one NIC for nvmf test 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
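Note on the trace above: nvmf_tcp_init builds a loopback-over-the-wire topology from the two ports of a single NIC. One port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic actually crosses the physical link instead of the host loopback. A minimal sketch of the same setup, assuming the port names from this run:

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the comment tag lets cleanup strip only this rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify both directions before loading nvme-tcp and starting the test.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The multipath test itself then bails out early ("only one NIC for nvmf test"): the '[' -z ']' check at multipath.sh:45 sees an empty second target IP, and multipath needs two usable paths, so the script reports success and falls through to nvmftestfini.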
00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.155 rmmod nvme_tcp 00:09:50.155 rmmod nvme_fabrics 00:09:50.155 rmmod nvme_keyring 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.155 16:36:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:51.535 00:09:51.535 real 0m7.945s 00:09:51.535 user 0m1.437s 00:09:51.535 sys 0m4.371s 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:51.535 ************************************ 00:09:51.535 END TEST nvmf_target_multipath 00:09:51.535 ************************************ 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.535 16:36:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:51.535 ************************************ 00:09:51.535 START TEST nvmf_zcopy 00:09:51.535 ************************************ 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:51.535 * Looking for test storage... 
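The double teardown above (once from multipath.sh:47, once more from the EXIT trap) is nvmftestfini running twice; the second pass is harmless because set +e tolerates the already-unloaded modules and already-removed namespace. The cleanup boils down to the following sketch, which assumes that _remove_spdk_ns (not expanded in this trace) deletes the namespace created during init:

    # Best-effort module unload; the {1..20} retry loop in the harness absorbs
    # transient "module in use" failures while connections drain.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # iptr: rewrite the ruleset without the SPDK-tagged entries, leaving
    # everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Presumed body of _remove_spdk_ns, plus the address flush seen above.
    ip netns del cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1

run_test then wraps the next script (zcopy.sh) the same way it wrapped this one: it prints the START banner, times the script (hence the real/user/sys block above), and prints the END banner on success.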
00:09:51.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.535 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.536 --rc genhtml_branch_coverage=1 00:09:51.536 --rc genhtml_function_coverage=1 00:09:51.536 --rc genhtml_legend=1 00:09:51.536 --rc geninfo_all_blocks=1 00:09:51.536 --rc geninfo_unexecuted_blocks=1 00:09:51.536 00:09:51.536 ' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.536 --rc genhtml_branch_coverage=1 00:09:51.536 --rc genhtml_function_coverage=1 00:09:51.536 --rc genhtml_legend=1 00:09:51.536 --rc geninfo_all_blocks=1 00:09:51.536 --rc geninfo_unexecuted_blocks=1 00:09:51.536 00:09:51.536 ' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.536 --rc genhtml_branch_coverage=1 00:09:51.536 --rc genhtml_function_coverage=1 00:09:51.536 --rc genhtml_legend=1 00:09:51.536 --rc geninfo_all_blocks=1 00:09:51.536 --rc geninfo_unexecuted_blocks=1 00:09:51.536 00:09:51.536 ' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.536 --rc genhtml_branch_coverage=1 00:09:51.536 --rc genhtml_function_coverage=1 00:09:51.536 --rc genhtml_legend=1 00:09:51.536 --rc geninfo_all_blocks=1 00:09:51.536 --rc geninfo_unexecuted_blocks=1 00:09:51.536 00:09:51.536 ' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:51.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:51.536 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:51.537 16:36:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:58.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:58.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:58.109 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:58.110 Found net devices under 0000:31:00.0: cvl_0_0 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:58.110 Found net devices under 0000:31:00.1: cvl_0_1 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.110 16:36:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:58.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:58.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:09:58.110 00:09:58.110 --- 10.0.0.2 ping statistics --- 00:09:58.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.110 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:58.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:09:58.110 00:09:58.110 --- 10.0.0.1 ping statistics --- 00:09:58.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.110 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2046769 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2046769 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2046769 ']' 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.110 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:58.110 [2024-12-06 16:36:46.173852] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
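Before the launch traced here, prepare_net_devs re-ran the PCI-to-netdev mapping (globbing /sys/bus/pci/devices/$pci/net/* for each supported device ID) and rebuilt the same namespace topology, so nvmfappstart can start the target inside the namespace that owns the target-side port. A sketch of the launch and readiness wait, with waitforlisten's socket polling paraphrased as a single RPC probe:

    # -m 0x2 pins the reactor to core 1, -e 0xFFFF enables every tracepoint
    # group, -i 0 selects shared-memory instance 0 for this app.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten blocks until the default RPC socket answers; one probe:
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null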
00:09:58.110 [2024-12-06 16:36:46.173904] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.110 [2024-12-06 16:36:46.261973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.110 [2024-12-06 16:36:46.288091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.110 [2024-12-06 16:36:46.288149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.110 [2024-12-06 16:36:46.288159] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.110 [2024-12-06 16:36:46.288167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.110 [2024-12-06 16:36:46.288174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.110 [2024-12-06 16:36:46.288984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.370 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.370 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:58.370 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.370 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.370 16:36:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.370 [2024-12-06 16:36:47.014849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.370 [2024-12-06 16:36:47.031160] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.370 malloc0 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.370 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:58.629 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.629 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:58.629 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:58.629 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:58.629 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:58.629 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:58.629 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:58.629 { 00:09:58.629 "params": { 00:09:58.629 "name": "Nvme$subsystem", 00:09:58.629 "trtype": "$TEST_TRANSPORT", 00:09:58.629 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.629 "adrfam": "ipv4", 00:09:58.629 "trsvcid": "$NVMF_PORT", 00:09:58.629 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.629 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.629 "hdgst": ${hdgst:-false}, 00:09:58.629 "ddgst": ${ddgst:-false} 00:09:58.630 }, 00:09:58.630 "method": "bdev_nvme_attach_controller" 00:09:58.630 } 00:09:58.630 EOF 00:09:58.630 )") 00:09:58.630 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:58.630 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
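gen_nvmf_target_json, traced here, assembles bdevperf's configuration from the heredoc fragment above: the $subsystem placeholders are substituted, the fragments are concatenated by cat, and jq pretty-prints the document that bdevperf reads from file descriptor 62 (the --json /dev/fd/62 process substitution). The attach parameters it encodes mirror the target just built over RPC; a sketch of that bring-up using the flags from this trace (rpc_cmd is a wrapper around scripts/rpc.py pointed at the target's socket, and the meanings in the comments are the usual readings of these options, not shown in the trace itself):

    # TCP transport with the harness's options (-o, -c 0); --zcopy enables
    # the zero-copy path this test exists to exercise.
    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem cnode1: allow any host (-a), serial number, max 10 namespaces.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB RAM-backed bdev with a 4096-byte block size, exported as NSID 1.
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The first bdevperf run below then drives a 10-second verify workload at queue depth 128 with 8 KiB I/O against the connected Nvme1 controller.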
00:09:58.630 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:58.630 16:36:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:58.630 "params": { 00:09:58.630 "name": "Nvme1", 00:09:58.630 "trtype": "tcp", 00:09:58.630 "traddr": "10.0.0.2", 00:09:58.630 "adrfam": "ipv4", 00:09:58.630 "trsvcid": "4420", 00:09:58.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:58.630 "hdgst": false, 00:09:58.630 "ddgst": false 00:09:58.630 }, 00:09:58.630 "method": "bdev_nvme_attach_controller" 00:09:58.630 }' 00:09:58.630 [2024-12-06 16:36:47.102232] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:09:58.630 [2024-12-06 16:36:47.102295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046804 ] 00:09:58.630 [2024-12-06 16:36:47.187974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.630 [2024-12-06 16:36:47.216847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.889 Running I/O for 10 seconds... 00:10:01.223 8861.00 IOPS, 69.23 MiB/s [2024-12-06T15:36:50.848Z] 9438.00 IOPS, 73.73 MiB/s [2024-12-06T15:36:51.783Z] 9636.67 IOPS, 75.29 MiB/s [2024-12-06T15:36:52.720Z] 9731.50 IOPS, 76.03 MiB/s [2024-12-06T15:36:53.658Z] 9791.40 IOPS, 76.50 MiB/s [2024-12-06T15:36:54.595Z] 9837.67 IOPS, 76.86 MiB/s [2024-12-06T15:36:55.972Z] 9868.43 IOPS, 77.10 MiB/s [2024-12-06T15:36:56.907Z] 9889.12 IOPS, 77.26 MiB/s [2024-12-06T15:36:57.844Z] 9905.78 IOPS, 77.39 MiB/s [2024-12-06T15:36:57.844Z] 9918.50 IOPS, 77.49 MiB/s 00:10:09.151 Latency(us) 00:10:09.151 [2024-12-06T15:36:57.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.151 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:09.151 Verification LBA range: start 0x0 length 0x1000 00:10:09.151 Nvme1n1 : 10.01 9921.41 77.51 0.00 0.00 12860.52 2116.27 26869.76 00:10:09.151 [2024-12-06T15:36:57.844Z] =================================================================================================================== 00:10:09.151 [2024-12-06T15:36:57.844Z] Total : 9921.41 77.51 0.00 0.00 12860.52 2116.27 26869.76 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2049137 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:09.151 { 00:10:09.151 "params": { 00:10:09.151 "name": 
"Nvme$subsystem", 00:10:09.151 "trtype": "$TEST_TRANSPORT", 00:10:09.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:09.151 "adrfam": "ipv4", 00:10:09.151 "trsvcid": "$NVMF_PORT", 00:10:09.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:09.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:09.151 "hdgst": ${hdgst:-false}, 00:10:09.151 "ddgst": ${ddgst:-false} 00:10:09.151 }, 00:10:09.151 "method": "bdev_nvme_attach_controller" 00:10:09.151 } 00:10:09.151 EOF 00:10:09.151 )") 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:09.151 [2024-12-06 16:36:57.678742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.151 [2024-12-06 16:36:57.678773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:09.151 16:36:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:09.151 "params": { 00:10:09.151 "name": "Nvme1", 00:10:09.151 "trtype": "tcp", 00:10:09.151 "traddr": "10.0.0.2", 00:10:09.151 "adrfam": "ipv4", 00:10:09.151 "trsvcid": "4420", 00:10:09.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:09.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:09.151 "hdgst": false, 00:10:09.151 "ddgst": false 00:10:09.151 }, 00:10:09.151 "method": "bdev_nvme_attach_controller" 00:10:09.151 }' 00:10:09.151 [2024-12-06 16:36:57.686728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.151 [2024-12-06 16:36:57.686739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.151 [2024-12-06 16:36:57.694745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.151 [2024-12-06 16:36:57.694753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.151 [2024-12-06 16:36:57.702765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.151 [2024-12-06 16:36:57.702774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.151 [2024-12-06 16:36:57.704514] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:09.151 [2024-12-06 16:36:57.704563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049137 ]
00:10:09.151 [2024-12-06 16:36:57.710785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:09.151 [2024-12-06 16:36:57.710794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:09.152 [... ERROR pair above repeated at ~8 ms intervals, 2024-12-06 16:36:57.718805 through 16:36:57.766937 ...]
00:10:09.152 [2024-12-06 16:36:57.768023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:09.152 [... ERROR pair repeated, 16:36:57.774948 through 16:36:57.782981 ...]
00:10:09.152 [2024-12-06 16:36:57.784093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.152-00:10:09.411 [... ERROR pair repeated, 16:36:57.790990 through 16:36:57.999544 ...]
00:10:09.411 Running I/O for 5 seconds...
00:10:09.411 [... ERROR pair repeated, 16:36:58.011697 through 16:36:58.081820 ...]
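The two ERROR lines repeating above are this test's expected failure path: while bdevperf runs I/O, the target is repeatedly asked over RPC to attach another namespace at an NSID that is already allocated, and spdk_nvmf_subsystem_add_ns_ext rejects each attempt. A minimal sketch of the kind of call that produces this pair, assuming a subsystem nqn.2016-06.io.spdk:cnode1 that already exposes a namespace at NSID 1 and a spare bdev Malloc1 (both names illustrative, not taken from this run):

  # Attach Malloc1 at an explicitly requested, already-occupied NSID;
  # the target fails the RPC and logs the subsystem.c/nvmf_rpc.c pair above.
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc1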
00:10:09.411-00:10:10.452 [... ERROR pair repeated, 16:36:58.081835 through 16:36:58.978565 ...]
00:10:10.452 [... ERROR pair repeated, 16:36:58.978580 through 16:36:58.996380 ...]
00:10:10.452 19561.00 IOPS, 152.82 MiB/s [2024-12-06T15:36:59.145Z]
00:10:10.452 [... ERROR pair repeated, 16:36:59.005248 through 16:36:59.102344 ...]
00:10:10.452-00:10:11.493 [... ERROR pair repeated, 16:36:59.102359 through 16:36:59.997099 ...]
00:10:11.493 [... ERROR pair repeated, 16:36:59.997117 through 16:37:00.005279 ...]
00:10:11.493 19613.50 IOPS, 153.23 MiB/s [2024-12-06T15:37:00.186Z]
00:10:11.493 [... ERROR pair repeated, 16:37:00.013892 through 16:37:00.120289 ...]
00:10:11.493 [2024-12-06 16:37:00.120304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.493 [2024-12-06 16:37:00.128556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.493 [2024-12-06 16:37:00.128575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.493 [2024-12-06 16:37:00.138037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.493 [2024-12-06 16:37:00.138052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.493 [2024-12-06 16:37:00.146634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.493 [2024-12-06 16:37:00.146649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.493 [2024-12-06 16:37:00.155358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.493 [2024-12-06 16:37:00.155373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.494 [2024-12-06 16:37:00.163890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.494 [2024-12-06 16:37:00.163905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.494 [2024-12-06 16:37:00.173113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.494 [2024-12-06 16:37:00.173128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.494 [2024-12-06 16:37:00.182158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.494 [2024-12-06 16:37:00.182174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.191183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.191198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.200015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.200030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.209052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.209068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.217929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.217945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.226756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.226772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.235676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.235692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.244313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.244328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.253075] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.253091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.262085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.262104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.271036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.271051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.280118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.280133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.288995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.289011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.297687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.297706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.306625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.306640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.315575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.315589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.324515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.324530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.333642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.333657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.342461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.342476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.351588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.351603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.360310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.360324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.369358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.369374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.377805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.377821] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.386746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.386761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.395598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.395613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.404492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.404506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.413346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.413360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.421599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.421614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.430478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.430493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:11.753 [2024-12-06 16:37:00.439503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:11.753 [2024-12-06 16:37:00.439518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.012 [2024-12-06 16:37:00.448213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.012 [2024-12-06 16:37:00.448228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.012 [2024-12-06 16:37:00.457104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.012 [2024-12-06 16:37:00.457119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.012 [2024-12-06 16:37:00.465714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.012 [2024-12-06 16:37:00.465736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.012 [2024-12-06 16:37:00.474255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.012 [2024-12-06 16:37:00.474270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.012 [2024-12-06 16:37:00.483619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.012 [2024-12-06 16:37:00.483634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.012 [2024-12-06 16:37:00.492070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.012 [2024-12-06 16:37:00.492085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.500794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.500809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.509500] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.509515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.517938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.517953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.526221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.526236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.535330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.535345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.544219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.544234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.552981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.552995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.561390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.561405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.570468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.570483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.578751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.578766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.587846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.587861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.596839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.596854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.605877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.605893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.614749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.614765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.623757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.623772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.632751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.632766] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.641104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.641119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.649985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.650000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.658711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.658725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.667771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.667786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.676503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.676519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.685796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.685811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.694118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.694133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.013 [2024-12-06 16:37:00.702757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.013 [2024-12-06 16:37:00.702772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.711739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.711754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.720701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.720716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.729371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.729387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.737715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.737730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.746584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.746599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.755540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.755555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.763913] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.763928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.773007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.773022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.781941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.781956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.790776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.790791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.799755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.799770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.808555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.808572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.817645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.817660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.826458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.826472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.835573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.835588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.844475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.844490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.853511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.853525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.862356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.862371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.870955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.870971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.879929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.879944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.889015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.889030] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.897962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.897978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.906570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.906585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.915338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.915353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.924289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.924304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.932872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.932887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.941834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.941849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.950629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.950644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.273 [2024-12-06 16:37:00.959603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.273 [2024-12-06 16:37:00.959618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:00.967757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:00.967772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:00.976605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:00.976620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:00.985263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:00.985278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:00.993779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:00.993794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.002658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.002674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 19650.00 IOPS, 153.52 MiB/s [2024-12-06T15:37:01.226Z] [2024-12-06 16:37:01.011450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.011466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 
16:37:01.020459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.020474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.029240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.029256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.037669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.037685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.046886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.046901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.055835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.055850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.064504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.064519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.073061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.073077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.081954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.081969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.090875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.090890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.099734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.099749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.108522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.108537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.117469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.117484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.126398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.126417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.135382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.135398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.144023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.144038] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.153155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.153170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.161519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.161534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.170489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.170504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.178934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.178949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.187956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.187971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.196885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.196900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.205824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.205840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.214259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.214275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.533 [2024-12-06 16:37:01.222512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.533 [2024-12-06 16:37:01.222527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.792 [2024-12-06 16:37:01.230876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.792 [2024-12-06 16:37:01.230892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.792 [2024-12-06 16:37:01.240028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.792 [2024-12-06 16:37:01.240043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.792 [2024-12-06 16:37:01.248787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.792 [2024-12-06 16:37:01.248802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.792 [2024-12-06 16:37:01.257734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.792 [2024-12-06 16:37:01.257749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.792 [2024-12-06 16:37:01.266538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.792 [2024-12-06 16:37:01.266553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.792 [2024-12-06 16:37:01.275406] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.792 [2024-12-06 16:37:01.275421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.792 [2024-12-06 16:37:01.284445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.284460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.293521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.293539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.302507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.302522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.310949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.310964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.320189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.320204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.329238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.329253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.338332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.338347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.347236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.347251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.355516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.355532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.364553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.364568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.373491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.373506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.382343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.382358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.390751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.390766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.399769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.399783] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.408594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.408609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.417651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.417666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.426424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.426439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.435107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.435122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.443795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.443810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.452212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.452227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.460987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.461005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.469280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.469294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:12.793 [2024-12-06 16:37:01.477863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:12.793 [2024-12-06 16:37:01.477878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.486389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.486404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.495419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.495434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.503863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.503878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.513010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.513026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.521978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.521994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.530827] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.530842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.539940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.539955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.548953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.548968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.557851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.557866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.566678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.566693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.575528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.575544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.584330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.584345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.593289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.593304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.601887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.601902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.610755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.610770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.619798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.619813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.628850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.628868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.637529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.637543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.645937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.645952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.654909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.654924] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.663195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.663210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.672339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.672354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.681345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.681360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.690091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.690110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.698870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.698885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.707542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.707557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.716457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.716472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.725535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.725549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.734479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.734493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.052 [2024-12-06 16:37:01.743498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.052 [2024-12-06 16:37:01.743513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.751809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.751824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.760555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.760570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.769453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.769468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.777961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.777975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.786694] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.786709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.795845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.795860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.803693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.803707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.812829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.812845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.821808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.821823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.830795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.830810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.839165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.839179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.847956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.847971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.857000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.857015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.865908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.865923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.874846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.874860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.883717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.883732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.892451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.892466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.901466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.901481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.910330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.910345] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.919323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.919338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.928134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.928149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.936654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.936669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.945386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.945401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.953872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.953886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.962981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.962995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.971940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.971955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.980899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.980914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:01.989936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:01.989950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.312 [2024-12-06 16:37:02.003194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.312 [2024-12-06 16:37:02.003210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.572 [2024-12-06 16:37:02.011437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.572 [2024-12-06 16:37:02.011452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.572 19653.75 IOPS, 153.54 MiB/s [2024-12-06T15:37:02.265Z] [2024-12-06 16:37:02.019879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.572 [2024-12-06 16:37:02.019894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.572 [2024-12-06 16:37:02.029350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.572 [2024-12-06 16:37:02.029364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.572 [2024-12-06 16:37:02.038209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.572 [2024-12-06 16:37:02.038224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:13.572 [2024-12-06 
16:37:02.047107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:13.572 [2024-12-06 16:37:02.047122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c/nvmf_rpc.c error pair repeats at 8-9 ms intervals from 16:37:02.056 through 16:37:03.110, continuing before and after the bandwidth summary below ...]
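The paired errors above are the expected output of this phase: the zcopy test keeps re-issuing the add-namespace RPC for NSID 1 while that NSID is still allocated, so spdk_nvmf_subsystem_add_ns_ext rejects each attempt and the RPC layer logs the failed call. The same pair of messages can be reproduced against any running target with SPDK's stock scripts/rpc.py (a minimal sketch, not the test script itself; the bdev and subsystem names mirror this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b malloc0                             # backing bdev
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # NSID 1 now allocated
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # rejected: "Requested NSID 1 already in use"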
00:10:14.354 19667.40 IOPS, 153.65 MiB/s [2024-12-06T15:37:03.047Z]
00:10:14.354 Latency(us)
00:10:14.354 [2024-12-06T15:37:03.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:14.354 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:14.354 Nvme1n1 : 5.01 19667.51 153.65 0.00 0.00 6502.26 2921.81 14527.15
00:10:14.354 [2024-12-06T15:37:03.047Z] ===================================================================================================================
00:10:14.354 [2024-12-06T15:37:03.047Z] Total : 19667.51 153.65 0.00 0.00 6502.26 2921.81 14527.15
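The summary is internally consistent with the workload parameters: at queue depth 128, Little's law predicts an average latency of 128 / 19667.51 IOPS ≈ 6.51 ms, in line with the reported 6502.26 µs. A quick arithmetic check:

  # Little's law: average latency ~= queue depth / IOPS
  awk -v qd=128 -v iops=19667.51 'BEGIN { printf "expected avg latency: %.0f us\n", qd / iops * 1e6 }'
  # -> expected avg latency: 6508 us (reported: 6502.26 us)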
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2049137) - No such process
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2049137
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.614 delay0
00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.614 16:37:03
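This part of the trace swaps the namespace onto a delay bdev before the abort phase: bdev_delay_create wraps malloc0 as delay0 with a 1,000,000 µs (1 s) average and p99 latency for both reads (-r/-t) and writes (-w/-n), so I/O stays in flight long enough for the abort example to have something to cancel. The remove/create/add sequence spanning this stretch of the log, condensed to bare RPC calls (names and values straight from the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # detach the fast malloc namespace
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1  # NSID 1 now backed by the slow bdev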
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.614 16:37:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:14.614 [2024-12-06 16:37:03.225447] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:21.181 Initializing NVMe Controllers 00:10:21.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:21.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:21.181 Initialization complete. Launching workers. 00:10:21.181 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 241, failed: 26181 00:10:21.181 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26298, failed to submit 124 00:10:21.181 success 26238, unsuccessful 60, failed 0 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.181 rmmod nvme_tcp 00:10:21.181 rmmod nvme_fabrics 00:10:21.181 rmmod nvme_keyring 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2046769 ']' 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2046769 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2046769 ']' 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2046769 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046769 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046769' 00:10:21.181 killing process with pid 2046769 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2046769 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2046769 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.181 16:37:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:23.151 00:10:23.151 real 0m31.665s 00:10:23.151 user 0m43.157s 00:10:23.151 sys 0m9.556s 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:23.151 ************************************ 00:10:23.151 END TEST nvmf_zcopy 00:10:23.151 ************************************ 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.151 ************************************ 00:10:23.151 START TEST nvmf_nmic 00:10:23.151 ************************************ 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:23.151 * Looking for test storage... 
00:10:23.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:23.151 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:23.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.456 --rc genhtml_branch_coverage=1 00:10:23.456 --rc genhtml_function_coverage=1 00:10:23.456 --rc genhtml_legend=1 00:10:23.456 --rc geninfo_all_blocks=1 00:10:23.456 --rc geninfo_unexecuted_blocks=1 00:10:23.456 00:10:23.456 ' 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:23.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.456 --rc genhtml_branch_coverage=1 00:10:23.456 --rc genhtml_function_coverage=1 00:10:23.456 --rc genhtml_legend=1 00:10:23.456 --rc geninfo_all_blocks=1 00:10:23.456 --rc geninfo_unexecuted_blocks=1 00:10:23.456 00:10:23.456 ' 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:23.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.456 --rc genhtml_branch_coverage=1 00:10:23.456 --rc genhtml_function_coverage=1 00:10:23.456 --rc genhtml_legend=1 00:10:23.456 --rc geninfo_all_blocks=1 00:10:23.456 --rc geninfo_unexecuted_blocks=1 00:10:23.456 00:10:23.456 ' 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:23.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.456 --rc genhtml_branch_coverage=1 00:10:23.456 --rc genhtml_function_coverage=1 00:10:23.456 --rc genhtml_legend=1 00:10:23.456 --rc geninfo_all_blocks=1 00:10:23.456 --rc geninfo_unexecuted_blocks=1 00:10:23.456 00:10:23.456 ' 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
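The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.x: lt splits each version string on '.', '-' and ':' into the ver1/ver2 arrays, then cmp_versions walks the fields numerically, padding the shorter version with zeros. A condensed stand-alone sketch of that comparison (not the verbatim source):

  # cmp_lt V1 V2 -> exit 0 iff V1 < V2, comparing dot/dash/colon-separated fields numerically
  cmp_lt() {
      local IFS=.-: i
      local -a a=($1) b=($2)
      for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1   # equal is not less-than
  }
  cmp_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # the 'lt 1.15 2' call traced above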
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... remainder identical to the export.sh@2 PATH dump above; elided ...]
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... same PATH with another round of toolchain prefixes; elided ...]
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the exported PATH echoed back; elided ...]
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:23.456 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:10:23.457
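With the harness sourced, the nvmf environment is fully pinned before nvmftestinit probes hardware: listener ports 4420/4421/4422, the SPDKISFASTANDAWESOME serial, and a host NQN minted fresh each run by nvme-cli, with the host ID carried along as connect arguments. Condensed to plain shell (the NVME_HOSTID derivation shown is one plausible expansion, not the verbatim source):

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:801c19ac-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep the trailing UUID as the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")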
16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:23.457 16:37:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:28.734 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:28.735 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:28.735 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.735 16:37:17 
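NIC classification here is driven purely by PCI vendor:device IDs: 0x8086:0x159b marks both functions as Intel E810 parts, so they land in the e810 bucket before their net devices are resolved. A stand-alone sketch of the same walk (the harness itself consults a prebuilt pci_bus_cache; this version reads sysfs directly):

  # List Intel E810 functions (0x1592/0x159b) and the net devices behind them
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      [[ $vendor == 0x8086 && $device =~ ^0x(1592|159b)$ ]] || continue
      echo "Found ${dev##*/} ($vendor - $device)"
      ls "$dev/net" 2>/dev/null        # cvl_0_0 / cvl_0_1 in this run
  done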
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:28.735 Found net devices under 0000:31:00.0: cvl_0_0 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:28.735 Found net devices under 0000:31:00.1: cvl_0_1 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:28.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:10:28.735 00:10:28.735 --- 10.0.0.2 ping statistics --- 00:10:28.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.735 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:28.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:10:28.735 00:10:28.735 --- 10.0.0.1 ping statistics --- 00:10:28.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.735 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.735 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2056154 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2056154 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2056154 ']' 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.736 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.736 [2024-12-06 16:37:17.366530] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:28.736 [2024-12-06 16:37:17.366601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.996 [2024-12-06 16:37:17.446506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.996 [2024-12-06 16:37:17.468778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.996 [2024-12-06 16:37:17.468823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.996 [2024-12-06 16:37:17.468829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.996 [2024-12-06 16:37:17.468834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.996 [2024-12-06 16:37:17.468839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.996 [2024-12-06 16:37:17.470358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.996 [2024-12-06 16:37:17.470514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.996 [2024-12-06 16:37:17.470670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.996 [2024-12-06 16:37:17.470672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.996 [2024-12-06 16:37:17.573575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.996 Malloc0 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.996 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.997 [2024-12-06 16:37:17.629128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:28.997 test case1: single bdev can't be used in multiple subsystems 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.997 [2024-12-06 16:37:17.652998] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:28.997 [2024-12-06 16:37:17.653015] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:28.997 [2024-12-06 16:37:17.653020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.997 request: 00:10:28.997 { 00:10:28.997 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:28.997 "namespace": { 00:10:28.997 "bdev_name": "Malloc0", 00:10:28.997 "no_auto_visible": false, 
00:10:28.997 "hide_metadata": false 00:10:28.997 }, 00:10:28.997 "method": "nvmf_subsystem_add_ns", 00:10:28.997 "req_id": 1 00:10:28.997 } 00:10:28.997 Got JSON-RPC error response 00:10:28.997 response: 00:10:28.997 { 00:10:28.997 "code": -32602, 00:10:28.997 "message": "Invalid parameters" 00:10:28.997 } 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:28.997 Adding namespace failed - expected result. 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:28.997 test case2: host connect to nvmf target in multiple paths 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.997 [2024-12-06 16:37:17.661112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.997 16:37:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.902 16:37:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:32.282 16:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:32.282 16:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:32.282 16:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.282 16:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:32.282 16:37:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:34.190 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:34.190 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:34.190 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.190 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:34.190 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.190 16:37:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:34.190 16:37:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:34.190 [global] 00:10:34.190 thread=1 00:10:34.190 invalidate=1 00:10:34.190 rw=write 00:10:34.190 time_based=1 00:10:34.190 runtime=1 00:10:34.190 ioengine=libaio 00:10:34.190 direct=1 00:10:34.190 bs=4096 00:10:34.190 iodepth=1 00:10:34.190 norandommap=0 00:10:34.190 numjobs=1 00:10:34.190 00:10:34.190 verify_dump=1 00:10:34.190 verify_backlog=512 00:10:34.190 verify_state_save=0 00:10:34.190 do_verify=1 00:10:34.190 verify=crc32c-intel 00:10:34.190 [job0] 00:10:34.190 filename=/dev/nvme0n1 00:10:34.190 Could not set queue depth (nvme0n1) 00:10:34.449 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:34.449 fio-3.35 00:10:34.449 Starting 1 thread 00:10:35.830 00:10:35.830 job0: (groupid=0, jobs=1): err= 0: pid=2057697: Fri Dec 6 16:37:24 2024 00:10:35.830 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:35.830 slat (nsec): min=3798, max=40662, avg=16074.67, stdev=5515.47 00:10:35.831 clat (usec): min=403, max=41387, avg=1487.51, stdev=4985.69 00:10:35.831 lat (usec): min=415, max=41428, avg=1503.58, stdev=4987.21 00:10:35.831 clat percentiles (usec): 00:10:35.831 | 1.00th=[ 668], 5.00th=[ 725], 10.00th=[ 758], 20.00th=[ 799], 00:10:35.831 | 30.00th=[ 832], 40.00th=[ 857], 50.00th=[ 873], 60.00th=[ 889], 00:10:35.831 | 70.00th=[ 906], 80.00th=[ 930], 90.00th=[ 947], 95.00th=[ 979], 00:10:35.831 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:35.831 | 99.99th=[41157] 00:10:35.831 write: IOPS=667, BW=2669KiB/s (2733kB/s)(2672KiB/1001msec); 0 zone resets 00:10:35.831 slat (nsec): min=3441, max=51197, avg=14639.61, stdev=9862.69 00:10:35.831 clat (usec): min=86, max=691, avg=322.66, stdev=125.28 00:10:35.831 lat (usec): min=98, max=742, avg=337.30, stdev=131.06 00:10:35.831 clat percentiles (usec): 00:10:35.831 | 1.00th=[ 119], 5.00th=[ 126], 10.00th=[ 178], 20.00th=[ 215], 00:10:35.831 | 30.00th=[ 247], 40.00th=[ 273], 50.00th=[ 297], 60.00th=[ 351], 00:10:35.831 | 70.00th=[ 383], 80.00th=[ 424], 90.00th=[ 502], 95.00th=[ 545], 00:10:35.831 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 693], 99.95th=[ 693], 00:10:35.831 | 99.99th=[ 693] 00:10:35.831 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:35.831 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:35.831 lat (usec) : 100=0.17%, 250=17.88%, 500=32.88%, 750=9.32%, 1000=38.39% 00:10:35.831 lat (msec) : 2=0.68%, 50=0.68% 00:10:35.831 cpu : usr=1.60%, sys=2.50%, ctx=1180, majf=0, minf=1 00:10:35.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:35.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:35.831 issued rwts: total=512,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:35.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:35.831 00:10:35.831 Run status group 0 (all jobs): 00:10:35.831 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:35.831 WRITE: bw=2669KiB/s (2733kB/s), 2669KiB/s-2669KiB/s (2733kB/s-2733kB/s), io=2672KiB (2736kB), run=1001-1001msec 00:10:35.831 00:10:35.831 Disk stats (read/write): 00:10:35.831 nvme0n1: ios=534/512, merge=0/0, ticks=751/116, in_queue=867, 
util=93.29% 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.831 rmmod nvme_tcp 00:10:35.831 rmmod nvme_fabrics 00:10:35.831 rmmod nvme_keyring 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2056154 ']' 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2056154 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2056154 ']' 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2056154 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2056154 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2056154' 00:10:35.831 killing process with pid 2056154 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 2056154 00:10:35.831 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2056154 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.090 16:37:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.990 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.990 00:10:37.990 real 0m14.924s 00:10:37.990 user 0m45.318s 00:10:37.990 sys 0m4.834s 00:10:37.990 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.990 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.990 ************************************ 00:10:37.990 END TEST nvmf_nmic 00:10:37.990 ************************************ 00:10:37.990 16:37:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:37.990 16:37:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.990 16:37:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.990 16:37:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.250 ************************************ 00:10:38.250 START TEST nvmf_fio_target 00:10:38.250 ************************************ 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:38.250 * Looking for test storage... 
00:10:38.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:38.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.250 --rc genhtml_branch_coverage=1 00:10:38.250 --rc genhtml_function_coverage=1 00:10:38.250 --rc genhtml_legend=1 00:10:38.250 --rc geninfo_all_blocks=1 00:10:38.250 --rc geninfo_unexecuted_blocks=1 00:10:38.250 00:10:38.250 ' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:38.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.250 --rc genhtml_branch_coverage=1 00:10:38.250 --rc genhtml_function_coverage=1 00:10:38.250 --rc genhtml_legend=1 00:10:38.250 --rc geninfo_all_blocks=1 00:10:38.250 --rc geninfo_unexecuted_blocks=1 00:10:38.250 00:10:38.250 ' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:38.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.250 --rc genhtml_branch_coverage=1 00:10:38.250 --rc genhtml_function_coverage=1 00:10:38.250 --rc genhtml_legend=1 00:10:38.250 --rc geninfo_all_blocks=1 00:10:38.250 --rc geninfo_unexecuted_blocks=1 00:10:38.250 00:10:38.250 ' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:38.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.250 --rc genhtml_branch_coverage=1 00:10:38.250 --rc genhtml_function_coverage=1 00:10:38.250 --rc genhtml_legend=1 00:10:38.250 --rc geninfo_all_blocks=1 00:10:38.250 --rc geninfo_unexecuted_blocks=1 00:10:38.250 00:10:38.250 ' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.250 16:37:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.250 16:37:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.528 16:37:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.528 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:43.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:43.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.529 16:37:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:43.529 Found net devices under 0000:31:00.0: cvl_0_0 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:43.529 Found net devices under 0000:31:00.1: cvl_0_1 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.529 16:37:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.529 16:37:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.562 ms 00:10:43.529 00:10:43.529 --- 10.0.0.2 ping statistics --- 00:10:43.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.529 rtt min/avg/max/mdev = 0.562/0.562/0.562/0.000 ms 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:10:43.529 00:10:43.529 --- 10.0.0.1 ping statistics --- 00:10:43.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.529 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2062381 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2062381 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2062381 ']' 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.529 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.530 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.530 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.530 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.530 [2024-12-06 16:37:32.181021] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:10:43.530 [2024-12-06 16:37:32.181078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.788 [2024-12-06 16:37:32.268578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.788 [2024-12-06 16:37:32.296816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.788 [2024-12-06 16:37:32.296866] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.788 [2024-12-06 16:37:32.296875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.788 [2024-12-06 16:37:32.296882] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.788 [2024-12-06 16:37:32.296888] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.788 [2024-12-06 16:37:32.299121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.788 [2024-12-06 16:37:32.299231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.788 [2024-12-06 16:37:32.299392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.788 [2024-12-06 16:37:32.299393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.356 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.356 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:44.356 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.356 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.356 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.356 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.356 16:37:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:44.614 [2024-12-06 16:37:33.127524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.614 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.874 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:44.874 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:44.874 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:44.874 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.134 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:45.134 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.393 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:45.393 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:45.393 16:37:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.652 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:45.652 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.652 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:45.652 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.910 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:45.910 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:46.168 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:46.168 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:46.168 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.426 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:46.426 16:37:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.684 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.684 [2024-12-06 16:37:35.280786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.684 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:46.942 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:46.942 16:37:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.844 16:37:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:48.844 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.844 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.844 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:48.844 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:48.844 16:37:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.773 16:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.773 16:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.773 16:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.773 16:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:50.773 16:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.773 16:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:50.773 16:37:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:50.773 [global] 00:10:50.773 thread=1 00:10:50.773 invalidate=1 00:10:50.773 rw=write 00:10:50.773 time_based=1 00:10:50.773 runtime=1 00:10:50.773 ioengine=libaio 00:10:50.773 direct=1 00:10:50.773 bs=4096 00:10:50.773 iodepth=1 00:10:50.773 norandommap=0 00:10:50.773 numjobs=1 00:10:50.773 00:10:50.773 verify_dump=1 00:10:50.773 verify_backlog=512 00:10:50.773 verify_state_save=0 00:10:50.773 do_verify=1 00:10:50.773 verify=crc32c-intel 00:10:50.773 [job0] 00:10:50.773 filename=/dev/nvme0n1 00:10:50.773 [job1] 00:10:50.773 filename=/dev/nvme0n2 00:10:50.773 [job2] 00:10:50.773 filename=/dev/nvme0n3 00:10:50.773 [job3] 00:10:50.773 filename=/dev/nvme0n4 00:10:50.773 Could not set queue depth (nvme0n1) 00:10:50.773 Could not set queue depth (nvme0n2) 00:10:50.773 Could not set queue depth (nvme0n3) 00:10:50.773 Could not set queue depth (nvme0n4) 00:10:51.034 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.034 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.034 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.034 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.034 fio-3.35 00:10:51.034 Starting 4 threads 00:10:52.412 00:10:52.412 job0: (groupid=0, jobs=1): err= 0: pid=2064300: Fri Dec 6 16:37:40 2024 00:10:52.412 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1029msec) 00:10:52.412 slat (nsec): min=11507, max=26928, avg=24760.76, stdev=4984.29 00:10:52.412 clat (usec): min=41092, max=42038, avg=41911.96, stdev=218.93 00:10:52.412 lat (usec): min=41103, max=42062, avg=41936.72, stdev=221.95 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 
20.00th=[41681], 00:10:52.412 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:52.412 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:52.412 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:52.412 | 99.99th=[42206] 00:10:52.412 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:52.412 slat (nsec): min=4124, max=51490, avg=19220.69, stdev=10686.32 00:10:52.412 clat (usec): min=265, max=871, avg=591.24, stdev=110.61 00:10:52.412 lat (usec): min=270, max=886, avg=610.46, stdev=113.07 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[ 326], 5.00th=[ 412], 10.00th=[ 445], 20.00th=[ 490], 00:10:52.412 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 644], 00:10:52.412 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:10:52.412 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 873], 99.95th=[ 873], 00:10:52.412 | 99.99th=[ 873] 00:10:52.412 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.412 lat (usec) : 500=21.55%, 750=70.32%, 1000=4.91% 00:10:52.412 lat (msec) : 50=3.21% 00:10:52.412 cpu : usr=0.19%, sys=1.17%, ctx=532, majf=0, minf=1 00:10:52.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.412 job1: (groupid=0, jobs=1): err= 0: pid=2064301: Fri Dec 6 16:37:40 2024 00:10:52.412 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1029msec) 00:10:52.412 slat (nsec): min=10535, max=25814, avg=23833.29, stdev=3719.94 00:10:52.412 clat (usec): min=40822, max=42088, avg=41843.93, stdev=354.80 00:10:52.412 lat (usec): min=40832, max=42107, avg=41867.76, stdev=357.19 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:10:52.412 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:52.412 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:52.412 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:52.412 | 99.99th=[42206] 00:10:52.412 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:52.412 slat (nsec): min=4004, max=49779, avg=18275.61, stdev=10016.34 00:10:52.412 clat (usec): min=166, max=935, avg=596.88, stdev=114.21 00:10:52.412 lat (usec): min=180, max=966, avg=615.16, stdev=116.56 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[ 293], 5.00th=[ 416], 10.00th=[ 449], 20.00th=[ 506], 00:10:52.412 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 644], 00:10:52.412 | 70.00th=[ 676], 80.00th=[ 701], 90.00th=[ 725], 95.00th=[ 758], 00:10:52.412 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 938], 99.95th=[ 938], 00:10:52.412 | 99.99th=[ 938] 00:10:52.412 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.412 lat (usec) : 250=0.38%, 500=17.77%, 750=72.78%, 1000=5.86% 00:10:52.412 lat (msec) : 50=3.21% 00:10:52.412 cpu : usr=0.19%, sys=1.07%, ctx=529, majf=0, minf=2 00:10:52.412 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.412 job2: (groupid=0, jobs=1): err= 0: pid=2064302: Fri Dec 6 16:37:40 2024 00:10:52.412 read: IOPS=16, BW=66.0KiB/s (67.5kB/s)(68.0KiB/1031msec) 00:10:52.412 slat (nsec): min=10892, max=27145, avg=24990.24, stdev=5270.96 00:10:52.412 clat (usec): min=40936, max=42007, avg=41343.99, stdev=468.45 00:10:52.412 lat (usec): min=40963, max=42034, avg=41368.98, stdev=469.07 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:52.412 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:52.412 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:52.412 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:52.412 | 99.99th=[42206] 00:10:52.412 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:52.412 slat (nsec): min=4267, max=52261, avg=19757.81, stdev=10529.13 00:10:52.412 clat (usec): min=270, max=1263, avg=612.81, stdev=132.09 00:10:52.412 lat (usec): min=285, max=1300, avg=632.57, stdev=134.38 00:10:52.412 clat percentiles (usec): 00:10:52.412 | 1.00th=[ 318], 5.00th=[ 420], 10.00th=[ 449], 20.00th=[ 506], 00:10:52.412 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 652], 00:10:52.412 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 791], 95.00th=[ 832], 00:10:52.412 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 1270], 99.95th=[ 1270], 00:10:52.412 | 99.99th=[ 1270] 00:10:52.412 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.412 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.412 lat (usec) : 500=17.96%, 750=63.52%, 1000=15.12% 00:10:52.412 lat (msec) : 2=0.19%, 50=3.21% 00:10:52.412 cpu : usr=0.58%, sys=0.78%, ctx=530, majf=0, minf=1 00:10:52.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.412 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.412 job3: (groupid=0, jobs=1): err= 0: pid=2064303: Fri Dec 6 16:37:40 2024 00:10:52.413 read: IOPS=18, BW=75.3KiB/s (77.1kB/s)(76.0KiB/1009msec) 00:10:52.413 slat (nsec): min=11334, max=28048, avg=25158.53, stdev=5757.42 00:10:52.413 clat (usec): min=1008, max=42071, avg=39697.38, stdev=9373.73 00:10:52.413 lat (usec): min=1020, max=42098, avg=39722.54, stdev=9377.05 00:10:52.413 clat percentiles (usec): 00:10:52.413 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41681], 00:10:52.413 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:10:52.413 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:52.413 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:52.413 | 99.99th=[42206] 00:10:52.413 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:52.413 slat (nsec): min=3638, max=51857, avg=19773.42, stdev=10859.72 00:10:52.413 clat 
(usec): min=163, max=2200, avg=469.78, stdev=127.40 00:10:52.413 lat (usec): min=178, max=2215, avg=489.56, stdev=129.46 00:10:52.413 clat percentiles (usec): 00:10:52.413 | 1.00th=[ 251], 5.00th=[ 302], 10.00th=[ 338], 20.00th=[ 383], 00:10:52.413 | 30.00th=[ 408], 40.00th=[ 429], 50.00th=[ 453], 60.00th=[ 498], 00:10:52.413 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 644], 00:10:52.413 | 99.00th=[ 717], 99.50th=[ 766], 99.90th=[ 2212], 99.95th=[ 2212], 00:10:52.413 | 99.99th=[ 2212] 00:10:52.413 bw ( KiB/s): min= 4096, max= 4096, per=51.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:52.413 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:52.413 lat (usec) : 250=1.13%, 500=57.82%, 750=36.53%, 1000=0.75% 00:10:52.413 lat (msec) : 2=0.19%, 4=0.19%, 50=3.39% 00:10:52.413 cpu : usr=0.40%, sys=1.59%, ctx=533, majf=0, minf=1 00:10:52.413 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.413 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.413 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.413 00:10:52.413 Run status group 0 (all jobs): 00:10:52.413 READ: bw=272KiB/s (278kB/s), 66.0KiB/s-75.3KiB/s (67.5kB/s-77.1kB/s), io=280KiB (287kB), run=1009-1031msec 00:10:52.413 WRITE: bw=7946KiB/s (8136kB/s), 1986KiB/s-2030KiB/s (2034kB/s-2078kB/s), io=8192KiB (8389kB), run=1009-1031msec 00:10:52.413 00:10:52.413 Disk stats (read/write): 00:10:52.413 nvme0n1: ios=67/512, merge=0/0, ticks=1295/291, in_queue=1586, util=96.89% 00:10:52.413 nvme0n2: ios=54/512, merge=0/0, ticks=761/286, in_queue=1047, util=93.50% 00:10:52.413 nvme0n3: ios=70/512, merge=0/0, ticks=1456/300, in_queue=1756, util=98.75% 00:10:52.413 nvme0n4: ios=39/512, merge=0/0, ticks=1551/196, in_queue=1747, util=98.63% 00:10:52.413 16:37:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:52.413 [global] 00:10:52.413 thread=1 00:10:52.413 invalidate=1 00:10:52.413 rw=randwrite 00:10:52.413 time_based=1 00:10:52.413 runtime=1 00:10:52.413 ioengine=libaio 00:10:52.413 direct=1 00:10:52.413 bs=4096 00:10:52.413 iodepth=1 00:10:52.413 norandommap=0 00:10:52.413 numjobs=1 00:10:52.413 00:10:52.413 verify_dump=1 00:10:52.413 verify_backlog=512 00:10:52.413 verify_state_save=0 00:10:52.413 do_verify=1 00:10:52.413 verify=crc32c-intel 00:10:52.413 [job0] 00:10:52.413 filename=/dev/nvme0n1 00:10:52.413 [job1] 00:10:52.413 filename=/dev/nvme0n2 00:10:52.413 [job2] 00:10:52.413 filename=/dev/nvme0n3 00:10:52.413 [job3] 00:10:52.413 filename=/dev/nvme0n4 00:10:52.413 Could not set queue depth (nvme0n1) 00:10:52.413 Could not set queue depth (nvme0n2) 00:10:52.413 Could not set queue depth (nvme0n3) 00:10:52.413 Could not set queue depth (nvme0n4) 00:10:52.673 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.673 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.673 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.673 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.673 fio-3.35 
00:10:52.673 Starting 4 threads 00:10:54.054 00:10:54.054 job0: (groupid=0, jobs=1): err= 0: pid=2064830: Fri Dec 6 16:37:42 2024 00:10:54.054 read: IOPS=228, BW=916KiB/s (938kB/s)(944KiB/1031msec) 00:10:54.054 slat (nsec): min=10428, max=45408, avg=19355.59, stdev=5362.06 00:10:54.054 clat (usec): min=928, max=42273, avg=3025.44, stdev=8501.57 00:10:54.054 lat (usec): min=944, max=42292, avg=3044.80, stdev=8501.49 00:10:54.054 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 947], 5.00th=[ 1020], 10.00th=[ 1057], 20.00th=[ 1074], 00:10:54.054 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1156], 60.00th=[ 1188], 00:10:54.054 | 70.00th=[ 1205], 80.00th=[ 1237], 90.00th=[ 1287], 95.00th=[ 1401], 00:10:54.054 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:54.054 | 99.99th=[42206] 00:10:54.054 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:54.054 slat (nsec): min=3971, max=46880, avg=14856.82, stdev=8531.78 00:10:54.054 clat (usec): min=282, max=1025, avg=587.12, stdev=123.28 00:10:54.054 lat (usec): min=286, max=1038, avg=601.98, stdev=125.03 00:10:54.054 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 306], 5.00th=[ 359], 10.00th=[ 437], 20.00th=[ 486], 00:10:54.054 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 611], 00:10:54.054 | 70.00th=[ 652], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 775], 00:10:54.054 | 99.00th=[ 848], 99.50th=[ 906], 99.90th=[ 1029], 99.95th=[ 1029], 00:10:54.054 | 99.99th=[ 1029] 00:10:54.054 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.054 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.054 lat (usec) : 500=15.78%, 750=46.39%, 1000=7.35% 00:10:54.054 lat (msec) : 2=29.01%, 50=1.47% 00:10:54.054 cpu : usr=0.58%, sys=1.07%, ctx=748, majf=0, minf=1 00:10:54.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 issued rwts: total=236,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.054 job1: (groupid=0, jobs=1): err= 0: pid=2064831: Fri Dec 6 16:37:42 2024 00:10:54.054 read: IOPS=17, BW=71.6KiB/s (73.4kB/s)(72.0KiB/1005msec) 00:10:54.054 slat (nsec): min=10231, max=27560, avg=24278.72, stdev=6325.74 00:10:54.054 clat (usec): min=805, max=42079, avg=39434.63, stdev=9649.06 00:10:54.054 lat (usec): min=816, max=42106, avg=39458.90, stdev=9652.43 00:10:54.054 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 807], 5.00th=[ 807], 10.00th=[41157], 20.00th=[41157], 00:10:54.054 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:54.054 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:54.054 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:54.054 | 99.99th=[42206] 00:10:54.054 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:54.054 slat (nsec): min=3360, max=50669, avg=16879.66, stdev=8781.33 00:10:54.054 clat (usec): min=177, max=4339, avg=545.98, stdev=218.65 00:10:54.054 lat (usec): min=193, max=4372, avg=562.86, stdev=220.69 00:10:54.054 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 247], 5.00th=[ 310], 10.00th=[ 351], 20.00th=[ 408], 00:10:54.054 | 30.00th=[ 449], 40.00th=[ 498], 50.00th=[ 545], 60.00th=[ 578], 00:10:54.054 | 70.00th=[ 619], 
80.00th=[ 668], 90.00th=[ 717], 95.00th=[ 775], 00:10:54.054 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 4359], 99.95th=[ 4359], 00:10:54.054 | 99.99th=[ 4359] 00:10:54.054 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.054 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.054 lat (usec) : 250=1.13%, 500=37.74%, 750=51.70%, 1000=6.04% 00:10:54.054 lat (msec) : 10=0.19%, 50=3.21% 00:10:54.054 cpu : usr=0.50%, sys=1.39%, ctx=531, majf=0, minf=1 00:10:54.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.054 job2: (groupid=0, jobs=1): err= 0: pid=2064832: Fri Dec 6 16:37:42 2024 00:10:54.054 read: IOPS=535, BW=2142KiB/s (2193kB/s)(2144KiB/1001msec) 00:10:54.054 slat (nsec): min=2921, max=45100, avg=19467.88, stdev=7694.97 00:10:54.054 clat (usec): min=440, max=1273, avg=930.47, stdev=91.21 00:10:54.054 lat (usec): min=453, max=1301, avg=949.94, stdev=92.39 00:10:54.054 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 594], 5.00th=[ 783], 10.00th=[ 824], 20.00th=[ 873], 00:10:54.054 | 30.00th=[ 898], 40.00th=[ 922], 50.00th=[ 938], 60.00th=[ 955], 00:10:54.054 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:10:54.054 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1270], 99.95th=[ 1270], 00:10:54.054 | 99.99th=[ 1270] 00:10:54.054 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:54.054 slat (nsec): min=3531, max=48851, avg=13693.81, stdev=6941.44 00:10:54.054 clat (usec): min=199, max=777, avg=456.43, stdev=102.53 00:10:54.054 lat (usec): min=213, max=792, avg=470.12, stdev=104.08 00:10:54.054 clat percentiles (usec): 00:10:54.054 | 1.00th=[ 265], 5.00th=[ 297], 10.00th=[ 318], 20.00th=[ 371], 00:10:54.054 | 30.00th=[ 404], 40.00th=[ 424], 50.00th=[ 449], 60.00th=[ 474], 00:10:54.054 | 70.00th=[ 502], 80.00th=[ 553], 90.00th=[ 603], 95.00th=[ 635], 00:10:54.054 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 758], 99.95th=[ 775], 00:10:54.054 | 99.99th=[ 775] 00:10:54.054 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=2 00:10:54.054 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:54.054 lat (usec) : 250=0.45%, 500=44.81%, 750=21.28%, 1000=27.05% 00:10:54.054 lat (msec) : 2=6.41% 00:10:54.054 cpu : usr=1.60%, sys=4.00%, ctx=1561, majf=0, minf=1 00:10:54.054 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.054 issued rwts: total=536,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.054 job3: (groupid=0, jobs=1): err= 0: pid=2064833: Fri Dec 6 16:37:42 2024 00:10:54.054 read: IOPS=792, BW=3169KiB/s (3245kB/s)(3172KiB/1001msec) 00:10:54.055 slat (nsec): min=3191, max=53963, avg=15316.69, stdev=7996.61 00:10:54.055 clat (usec): min=266, max=1014, avg=735.53, stdev=110.80 00:10:54.055 lat (usec): min=279, max=1022, avg=750.85, stdev=111.97 00:10:54.055 clat percentiles (usec): 00:10:54.055 | 1.00th=[ 416], 
5.00th=[ 545], 10.00th=[ 586], 20.00th=[ 652], 00:10:54.055 | 30.00th=[ 693], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 775], 00:10:54.055 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 857], 95.00th=[ 889], 00:10:54.055 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1012], 99.95th=[ 1012], 00:10:54.055 | 99.99th=[ 1012] 00:10:54.055 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:54.055 slat (nsec): min=4114, max=41509, avg=14893.72, stdev=8038.75 00:10:54.055 clat (usec): min=101, max=751, avg=369.48, stdev=112.31 00:10:54.055 lat (usec): min=106, max=765, avg=384.37, stdev=113.83 00:10:54.055 clat percentiles (usec): 00:10:54.055 | 1.00th=[ 165], 5.00th=[ 204], 10.00th=[ 237], 20.00th=[ 273], 00:10:54.055 | 30.00th=[ 293], 40.00th=[ 318], 50.00th=[ 355], 60.00th=[ 396], 00:10:54.055 | 70.00th=[ 424], 80.00th=[ 469], 90.00th=[ 537], 95.00th=[ 570], 00:10:54.055 | 99.00th=[ 635], 99.50th=[ 652], 99.90th=[ 685], 99.95th=[ 750], 00:10:54.055 | 99.99th=[ 750] 00:10:54.055 bw ( KiB/s): min= 4096, max= 4096, per=34.37%, avg=4096.00, stdev= 0.00, samples=1 00:10:54.055 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:54.055 lat (usec) : 250=7.15%, 500=42.05%, 750=29.66%, 1000=21.08% 00:10:54.055 lat (msec) : 2=0.06% 00:10:54.055 cpu : usr=1.40%, sys=2.50%, ctx=1818, majf=0, minf=1 00:10:54.055 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.055 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.055 issued rwts: total=793,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.055 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.055 00:10:54.055 Run status group 0 (all jobs): 00:10:54.055 READ: bw=6142KiB/s (6289kB/s), 71.6KiB/s-3169KiB/s (73.4kB/s-3245kB/s), io=6332KiB (6484kB), run=1001-1031msec 00:10:54.055 WRITE: bw=11.6MiB/s (12.2MB/s), 1986KiB/s-4092KiB/s (2034kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1031msec 00:10:54.055 00:10:54.055 Disk stats (read/write): 00:10:54.055 nvme0n1: ios=206/512, merge=0/0, ticks=523/287, in_queue=810, util=87.98% 00:10:54.055 nvme0n2: ios=64/512, merge=0/0, ticks=1026/218, in_queue=1244, util=98.68% 00:10:54.055 nvme0n3: ios=570/790, merge=0/0, ticks=1114/276, in_queue=1390, util=98.65% 00:10:54.055 nvme0n4: ios=650/1024, merge=0/0, ticks=849/366, in_queue=1215, util=98.43% 00:10:54.055 16:37:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:54.055 [global] 00:10:54.055 thread=1 00:10:54.055 invalidate=1 00:10:54.055 rw=write 00:10:54.055 time_based=1 00:10:54.055 runtime=1 00:10:54.055 ioengine=libaio 00:10:54.055 direct=1 00:10:54.055 bs=4096 00:10:54.055 iodepth=128 00:10:54.055 norandommap=0 00:10:54.055 numjobs=1 00:10:54.055 00:10:54.055 verify_dump=1 00:10:54.055 verify_backlog=512 00:10:54.055 verify_state_save=0 00:10:54.055 do_verify=1 00:10:54.055 verify=crc32c-intel 00:10:54.055 [job0] 00:10:54.055 filename=/dev/nvme0n1 00:10:54.055 [job1] 00:10:54.055 filename=/dev/nvme0n2 00:10:54.055 [job2] 00:10:54.055 filename=/dev/nvme0n3 00:10:54.055 [job3] 00:10:54.055 filename=/dev/nvme0n4 00:10:54.055 Could not set queue depth (nvme0n1) 00:10:54.055 Could not set queue depth (nvme0n2) 00:10:54.055 Could not set queue depth (nvme0n3) 00:10:54.055 Could not set queue depth (nvme0n4) 00:10:54.055 job0: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.055 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.055 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.055 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.055 fio-3.35 00:10:54.055 Starting 4 threads 00:10:55.438 00:10:55.438 job0: (groupid=0, jobs=1): err= 0: pid=2065349: Fri Dec 6 16:37:43 2024 00:10:55.438 read: IOPS=9650, BW=37.7MiB/s (39.5MB/s)(38.0MiB/1008msec) 00:10:55.438 slat (nsec): min=897, max=6233.5k, avg=52669.81, stdev=413779.00 00:10:55.438 clat (usec): min=2027, max=13455, avg=6994.51, stdev=1645.96 00:10:55.438 lat (usec): min=2029, max=15215, avg=7047.18, stdev=1679.91 00:10:55.438 clat percentiles (usec): 00:10:55.438 | 1.00th=[ 3589], 5.00th=[ 4948], 10.00th=[ 5276], 20.00th=[ 5735], 00:10:55.438 | 30.00th=[ 6259], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6915], 00:10:55.438 | 70.00th=[ 7242], 80.00th=[ 8029], 90.00th=[ 9241], 95.00th=[10683], 00:10:55.438 | 99.00th=[11863], 99.50th=[12256], 99.90th=[12780], 99.95th=[12911], 00:10:55.438 | 99.99th=[13435] 00:10:55.438 write: IOPS=9831, BW=38.4MiB/s (40.3MB/s)(38.7MiB/1008msec); 0 zone resets 00:10:55.438 slat (nsec): min=1560, max=5840.2k, avg=44330.88, stdev=276099.19 00:10:55.438 clat (usec): min=519, max=14672, avg=6042.53, stdev=1663.07 00:10:55.438 lat (usec): min=528, max=14674, avg=6086.86, stdev=1674.56 00:10:55.438 clat percentiles (usec): 00:10:55.438 | 1.00th=[ 2147], 5.00th=[ 3294], 10.00th=[ 3752], 20.00th=[ 4752], 00:10:55.438 | 30.00th=[ 5538], 40.00th=[ 5866], 50.00th=[ 6456], 60.00th=[ 6718], 00:10:55.438 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7701], 00:10:55.438 | 99.00th=[11994], 99.50th=[14484], 99.90th=[14615], 99.95th=[14615], 00:10:55.438 | 99.99th=[14615] 00:10:55.438 bw ( KiB/s): min=36352, max=41904, per=39.90%, avg=39128.00, stdev=3925.86, samples=2 00:10:55.438 iops : min= 9088, max=10476, avg=9782.00, stdev=981.46, samples=2 00:10:55.438 lat (usec) : 750=0.02% 00:10:55.438 lat (msec) : 2=0.32%, 4=6.83%, 10=88.80%, 20=4.04% 00:10:55.438 cpu : usr=4.07%, sys=7.15%, ctx=935, majf=0, minf=1 00:10:55.438 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:55.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.438 issued rwts: total=9728,9910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.438 job1: (groupid=0, jobs=1): err= 0: pid=2065350: Fri Dec 6 16:37:43 2024 00:10:55.438 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:55.438 slat (nsec): min=892, max=30812k, avg=131334.46, stdev=942349.72 00:10:55.438 clat (usec): min=5233, max=71715, avg=17598.44, stdev=12929.54 00:10:55.438 lat (usec): min=5235, max=74645, avg=17729.78, stdev=12972.46 00:10:55.438 clat percentiles (usec): 00:10:55.438 | 1.00th=[ 5800], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 9372], 00:10:55.438 | 30.00th=[10683], 40.00th=[14353], 50.00th=[14877], 60.00th=[15795], 00:10:55.438 | 70.00th=[17957], 80.00th=[19530], 90.00th=[29492], 95.00th=[58983], 00:10:55.438 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:10:55.438 | 99.99th=[71828] 
00:10:55.438 write: IOPS=3128, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1004msec); 0 zone resets 00:10:55.438 slat (nsec): min=1546, max=51128k, avg=186100.13, stdev=1455646.08 00:10:55.438 clat (usec): min=2800, max=74640, avg=23202.18, stdev=14897.58 00:10:55.438 lat (usec): min=3340, max=74642, avg=23388.28, stdev=14957.57 00:10:55.438 clat percentiles (usec): 00:10:55.438 | 1.00th=[ 5014], 5.00th=[ 8979], 10.00th=[ 9765], 20.00th=[10683], 00:10:55.438 | 30.00th=[12125], 40.00th=[14484], 50.00th=[16450], 60.00th=[21103], 00:10:55.438 | 70.00th=[30278], 80.00th=[38536], 90.00th=[43779], 95.00th=[50594], 00:10:55.438 | 99.00th=[66847], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:55.438 | 99.99th=[74974] 00:10:55.438 bw ( KiB/s): min= 9208, max=15368, per=12.53%, avg=12288.00, stdev=4355.78, samples=2 00:10:55.438 iops : min= 2302, max= 3842, avg=3072.00, stdev=1088.94, samples=2 00:10:55.438 lat (msec) : 4=0.39%, 10=18.19%, 20=51.30%, 50=24.56%, 100=5.57% 00:10:55.439 cpu : usr=1.79%, sys=2.19%, ctx=244, majf=0, minf=1 00:10:55.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:55.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.439 issued rwts: total=3072,3141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.439 job2: (groupid=0, jobs=1): err= 0: pid=2065351: Fri Dec 6 16:37:43 2024 00:10:55.439 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:10:55.439 slat (nsec): min=958, max=43517k, avg=91262.99, stdev=812529.72 00:10:55.439 clat (usec): min=2935, max=54976, avg=11318.34, stdev=6944.75 00:10:55.439 lat (usec): min=2938, max=54978, avg=11409.60, stdev=6980.41 00:10:55.439 clat percentiles (usec): 00:10:55.439 | 1.00th=[ 4752], 5.00th=[ 6783], 10.00th=[ 7635], 20.00th=[ 8225], 00:10:55.439 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9896], 00:10:55.439 | 70.00th=[10683], 80.00th=[12911], 90.00th=[15139], 95.00th=[20317], 00:10:55.439 | 99.00th=[50594], 99.50th=[53216], 99.90th=[53216], 99.95th=[54789], 00:10:55.439 | 99.99th=[54789] 00:10:55.439 write: IOPS=5988, BW=23.4MiB/s (24.5MB/s)(23.6MiB/1007msec); 0 zone resets 00:10:55.439 slat (nsec): min=1713, max=7608.9k, avg=76751.09, stdev=436018.21 00:10:55.439 clat (usec): min=1936, max=27496, avg=10530.89, stdev=4439.73 00:10:55.439 lat (usec): min=1940, max=27505, avg=10607.64, stdev=4474.37 00:10:55.439 clat percentiles (usec): 00:10:55.439 | 1.00th=[ 3228], 5.00th=[ 5080], 10.00th=[ 6128], 20.00th=[ 7308], 00:10:55.439 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8717], 60.00th=[ 9896], 00:10:55.439 | 70.00th=[12780], 80.00th=[14615], 90.00th=[18220], 95.00th=[19268], 00:10:55.439 | 99.00th=[20579], 99.50th=[21103], 99.90th=[21890], 99.95th=[26346], 00:10:55.439 | 99.99th=[27395] 00:10:55.439 bw ( KiB/s): min=20480, max=26744, per=24.08%, avg=23612.00, stdev=4429.32, samples=2 00:10:55.439 iops : min= 5120, max= 6686, avg=5903.00, stdev=1107.33, samples=2 00:10:55.439 lat (msec) : 2=0.05%, 4=1.51%, 10=59.60%, 20=35.45%, 50=2.46% 00:10:55.439 lat (msec) : 100=0.93% 00:10:55.439 cpu : usr=4.37%, sys=3.28%, ctx=575, majf=0, minf=1 00:10:55.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:55.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:10:55.439 issued rwts: total=5632,6030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.439 job3: (groupid=0, jobs=1): err= 0: pid=2065352: Fri Dec 6 16:37:43 2024 00:10:55.439 read: IOPS=5257, BW=20.5MiB/s (21.5MB/s)(20.7MiB/1006msec) 00:10:55.439 slat (nsec): min=928, max=46142k, avg=108413.59, stdev=864256.91 00:10:55.439 clat (usec): min=2447, max=69911, avg=12880.78, stdev=10360.80 00:10:55.439 lat (usec): min=6090, max=69921, avg=12989.19, stdev=10429.86 00:10:55.439 clat percentiles (usec): 00:10:55.439 | 1.00th=[ 7111], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 8848], 00:10:55.439 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:10:55.439 | 70.00th=[10028], 80.00th=[12256], 90.00th=[19530], 95.00th=[26346], 00:10:55.439 | 99.00th=[62129], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:10:55.439 | 99.99th=[69731] 00:10:55.439 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:10:55.439 slat (nsec): min=1574, max=7253.1k, avg=72431.10, stdev=335845.45 00:10:55.439 clat (usec): min=5710, max=62526, avg=10418.63, stdev=6440.24 00:10:55.439 lat (usec): min=5713, max=62541, avg=10491.06, stdev=6449.52 00:10:55.439 clat percentiles (usec): 00:10:55.439 | 1.00th=[ 6456], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 7963], 00:10:55.439 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:10:55.439 | 70.00th=[ 9241], 80.00th=[11731], 90.00th=[14746], 95.00th=[16581], 00:10:55.439 | 99.00th=[51643], 99.50th=[58983], 99.90th=[62653], 99.95th=[62653], 00:10:55.439 | 99.99th=[62653] 00:10:55.439 bw ( KiB/s): min=16384, max=28672, per=22.97%, avg=22528.00, stdev=8688.93, samples=2 00:10:55.439 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:10:55.439 lat (msec) : 4=0.01%, 10=72.48%, 20=21.41%, 50=4.07%, 100=2.03% 00:10:55.439 cpu : usr=2.59%, sys=3.68%, ctx=794, majf=0, minf=1 00:10:55.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:55.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:55.439 issued rwts: total=5289,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:55.439 00:10:55.439 Run status group 0 (all jobs): 00:10:55.439 READ: bw=91.9MiB/s (96.4MB/s), 12.0MiB/s-37.7MiB/s (12.5MB/s-39.5MB/s), io=92.7MiB (97.2MB), run=1004-1008msec 00:10:55.439 WRITE: bw=95.8MiB/s (100MB/s), 12.2MiB/s-38.4MiB/s (12.8MB/s-40.3MB/s), io=96.5MiB (101MB), run=1004-1008msec 00:10:55.439 00:10:55.439 Disk stats (read/write): 00:10:55.439 nvme0n1: ios=8242/8666, merge=0/0, ticks=54196/48971, in_queue=103167, util=92.48% 00:10:55.439 nvme0n2: ios=2540/2560, merge=0/0, ticks=13286/13427, in_queue=26713, util=89.95% 00:10:55.439 nvme0n3: ios=4661/4967, merge=0/0, ticks=40483/44046, in_queue=84529, util=99.90% 00:10:55.439 nvme0n4: ios=4543/4608, merge=0/0, ticks=18673/13089, in_queue=31762, util=93.91% 00:10:55.439 16:37:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:55.439 [global] 00:10:55.439 thread=1 00:10:55.439 invalidate=1 00:10:55.439 rw=randwrite 00:10:55.439 time_based=1 00:10:55.439 runtime=1 00:10:55.439 ioengine=libaio 00:10:55.439 direct=1 00:10:55.439 bs=4096 
00:10:55.439 iodepth=128 00:10:55.439 norandommap=0 00:10:55.439 numjobs=1 00:10:55.439 00:10:55.439 verify_dump=1 00:10:55.439 verify_backlog=512 00:10:55.439 verify_state_save=0 00:10:55.439 do_verify=1 00:10:55.439 verify=crc32c-intel 00:10:55.439 [job0] 00:10:55.439 filename=/dev/nvme0n1 00:10:55.439 [job1] 00:10:55.439 filename=/dev/nvme0n2 00:10:55.439 [job2] 00:10:55.439 filename=/dev/nvme0n3 00:10:55.439 [job3] 00:10:55.439 filename=/dev/nvme0n4 00:10:55.439 Could not set queue depth (nvme0n1) 00:10:55.439 Could not set queue depth (nvme0n2) 00:10:55.439 Could not set queue depth (nvme0n3) 00:10:55.439 Could not set queue depth (nvme0n4) 00:10:55.699 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.699 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.699 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.699 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.699 fio-3.35 00:10:55.699 Starting 4 threads 00:10:57.109 00:10:57.109 job0: (groupid=0, jobs=1): err= 0: pid=2065880: Fri Dec 6 16:37:45 2024 00:10:57.109 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:10:57.109 slat (nsec): min=885, max=13946k, avg=80669.89, stdev=538761.55 00:10:57.109 clat (usec): min=4748, max=43150, avg=10000.48, stdev=6939.38 00:10:57.109 lat (usec): min=4759, max=43170, avg=10081.15, stdev=6997.16 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 6718], 20.00th=[ 7177], 00:10:57.109 | 30.00th=[ 7308], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7701], 00:10:57.109 | 70.00th=[ 7832], 80.00th=[ 8717], 90.00th=[21890], 95.00th=[29230], 00:10:57.109 | 99.00th=[35914], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:10:57.109 | 99.99th=[43254] 00:10:57.109 write: IOPS=6586, BW=25.7MiB/s (27.0MB/s)(25.9MiB/1008msec); 0 zone resets 00:10:57.109 slat (nsec): min=1478, max=10936k, avg=72767.52, stdev=390923.54 00:10:57.109 clat (usec): min=3932, max=62250, avg=9909.41, stdev=8972.23 00:10:57.109 lat (usec): min=3941, max=62261, avg=9982.18, stdev=9030.76 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 4490], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 6915], 00:10:57.109 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7373], 00:10:57.109 | 70.00th=[ 7570], 80.00th=[ 8455], 90.00th=[12649], 95.00th=[31065], 00:10:57.109 | 99.00th=[54789], 99.50th=[56886], 99.90th=[62129], 99.95th=[62129], 00:10:57.109 | 99.99th=[62129] 00:10:57.109 bw ( KiB/s): min=22464, max=29624, per=26.37%, avg=26044.00, stdev=5062.88, samples=2 00:10:57.109 iops : min= 5616, max= 7406, avg=6511.00, stdev=1265.72, samples=2 00:10:57.109 lat (msec) : 4=0.05%, 10=85.19%, 20=6.27%, 50=7.76%, 100=0.72% 00:10:57.109 cpu : usr=2.88%, sys=5.26%, ctx=853, majf=0, minf=1 00:10:57.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:57.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.109 issued rwts: total=6144,6639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.109 job1: (groupid=0, jobs=1): err= 0: pid=2065881: Fri Dec 6 16:37:45 2024 00:10:57.109 read: IOPS=7641, 
BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec) 00:10:57.109 slat (nsec): min=892, max=7067.6k, avg=67414.64, stdev=474591.02 00:10:57.109 clat (usec): min=2440, max=22277, avg=8827.90, stdev=3155.84 00:10:57.109 lat (usec): min=2447, max=22304, avg=8895.32, stdev=3194.77 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 3392], 5.00th=[ 5932], 10.00th=[ 6259], 20.00th=[ 6652], 00:10:57.109 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7898], 00:10:57.109 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[13960], 95.00th=[16057], 00:10:57.109 | 99.00th=[17695], 99.50th=[18744], 99.90th=[20317], 99.95th=[21365], 00:10:57.109 | 99.99th=[22152] 00:10:57.109 write: IOPS=8061, BW=31.5MiB/s (33.0MB/s)(31.6MiB/1005msec); 0 zone resets 00:10:57.109 slat (nsec): min=1502, max=5829.1k, avg=52234.14, stdev=315872.25 00:10:57.109 clat (usec): min=853, max=21838, avg=7380.69, stdev=3177.83 00:10:57.109 lat (usec): min=860, max=21847, avg=7432.92, stdev=3200.76 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 2212], 5.00th=[ 3326], 10.00th=[ 4113], 20.00th=[ 5276], 00:10:57.109 | 30.00th=[ 6325], 40.00th=[ 6587], 50.00th=[ 6783], 60.00th=[ 6980], 00:10:57.109 | 70.00th=[ 7242], 80.00th=[ 8586], 90.00th=[12125], 95.00th=[13435], 00:10:57.109 | 99.00th=[19006], 99.50th=[20841], 99.90th=[21627], 99.95th=[21890], 00:10:57.109 | 99.99th=[21890] 00:10:57.109 bw ( KiB/s): min=30336, max=33456, per=32.29%, avg=31896.00, stdev=2206.17, samples=2 00:10:57.109 iops : min= 7584, max= 8364, avg=7974.00, stdev=551.54, samples=2 00:10:57.109 lat (usec) : 1000=0.02% 00:10:57.109 lat (msec) : 2=0.33%, 4=4.73%, 10=72.16%, 20=22.31%, 50=0.45% 00:10:57.109 cpu : usr=3.49%, sys=5.78%, ctx=747, majf=0, minf=1 00:10:57.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:57.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.109 issued rwts: total=7680,8102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.109 job2: (groupid=0, jobs=1): err= 0: pid=2065882: Fri Dec 6 16:37:45 2024 00:10:57.109 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:57.109 slat (nsec): min=928, max=9827.4k, avg=112743.87, stdev=644608.56 00:10:57.109 clat (usec): min=6455, max=30434, avg=13230.30, stdev=3586.94 00:10:57.109 lat (usec): min=6457, max=30436, avg=13343.04, stdev=3647.98 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 8094], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10552], 00:10:57.109 | 30.00th=[10814], 40.00th=[11863], 50.00th=[12649], 60.00th=[13435], 00:10:57.109 | 70.00th=[14746], 80.00th=[15795], 90.00th=[18220], 95.00th=[19268], 00:10:57.109 | 99.00th=[26084], 99.50th=[27132], 99.90th=[30540], 99.95th=[30540], 00:10:57.109 | 99.99th=[30540] 00:10:57.109 write: IOPS=4500, BW=17.6MiB/s (18.4MB/s)(17.7MiB/1004msec); 0 zone resets 00:10:57.109 slat (nsec): min=1524, max=4907.7k, avg=115395.09, stdev=400364.94 00:10:57.109 clat (usec): min=3519, max=36571, avg=16174.22, stdev=4847.36 00:10:57.109 lat (usec): min=3897, max=36577, avg=16289.61, stdev=4870.07 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 5407], 5.00th=[ 9634], 10.00th=[11469], 20.00th=[12256], 00:10:57.109 | 30.00th=[13042], 40.00th=[14091], 50.00th=[15795], 60.00th=[16712], 00:10:57.109 | 70.00th=[18482], 80.00th=[19268], 90.00th=[22938], 95.00th=[25035], 00:10:57.109 | 
99.00th=[30278], 99.50th=[33424], 99.90th=[36439], 99.95th=[36439], 00:10:57.109 | 99.99th=[36439] 00:10:57.109 bw ( KiB/s): min=17208, max=17928, per=17.79%, avg=17568.00, stdev=509.12, samples=2 00:10:57.109 iops : min= 4302, max= 4482, avg=4392.00, stdev=127.28, samples=2 00:10:57.109 lat (msec) : 4=0.10%, 10=10.15%, 20=78.83%, 50=10.92% 00:10:57.109 cpu : usr=2.19%, sys=3.79%, ctx=699, majf=0, minf=1 00:10:57.109 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:57.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.109 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.109 issued rwts: total=4096,4519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.109 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.109 job3: (groupid=0, jobs=1): err= 0: pid=2065883: Fri Dec 6 16:37:45 2024 00:10:57.109 read: IOPS=5455, BW=21.3MiB/s (22.3MB/s)(21.5MiB/1008msec) 00:10:57.109 slat (nsec): min=1009, max=7238.6k, avg=93494.01, stdev=584453.13 00:10:57.109 clat (usec): min=2081, max=28052, avg=11867.67, stdev=4217.35 00:10:57.109 lat (usec): min=3478, max=28076, avg=11961.17, stdev=4269.88 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 4490], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7963], 00:10:57.109 | 30.00th=[ 8356], 40.00th=[ 9896], 50.00th=[11731], 60.00th=[13698], 00:10:57.109 | 70.00th=[14091], 80.00th=[15139], 90.00th=[17695], 95.00th=[19792], 00:10:57.109 | 99.00th=[22152], 99.50th=[22676], 99.90th=[25035], 99.95th=[25297], 00:10:57.109 | 99.99th=[28181] 00:10:57.109 write: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec); 0 zone resets 00:10:57.109 slat (nsec): min=1544, max=8496.4k, avg=82183.58, stdev=538161.43 00:10:57.109 clat (usec): min=878, max=26326, avg=11109.66, stdev=4910.76 00:10:57.109 lat (usec): min=886, max=26328, avg=11191.84, stdev=4957.71 00:10:57.109 clat percentiles (usec): 00:10:57.109 | 1.00th=[ 3949], 5.00th=[ 4883], 10.00th=[ 6128], 20.00th=[ 6521], 00:10:57.110 | 30.00th=[ 7373], 40.00th=[ 8455], 50.00th=[10552], 60.00th=[11863], 00:10:57.110 | 70.00th=[13435], 80.00th=[15139], 90.00th=[17957], 95.00th=[20841], 00:10:57.110 | 99.00th=[24511], 99.50th=[25035], 99.90th=[25560], 99.95th=[25560], 00:10:57.110 | 99.99th=[26346] 00:10:57.110 bw ( KiB/s): min=20480, max=24576, per=22.81%, avg=22528.00, stdev=2896.31, samples=2 00:10:57.110 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:57.110 lat (usec) : 1000=0.02% 00:10:57.110 lat (msec) : 2=0.06%, 4=0.86%, 10=42.77%, 20=50.98%, 50=5.30% 00:10:57.110 cpu : usr=3.38%, sys=3.67%, ctx=384, majf=0, minf=2 00:10:57.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:57.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.110 issued rwts: total=5499,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.110 00:10:57.110 Run status group 0 (all jobs): 00:10:57.110 READ: bw=90.8MiB/s (95.2MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.3MB/s), io=91.5MiB (95.9MB), run=1004-1008msec 00:10:57.110 WRITE: bw=96.5MiB/s (101MB/s), 17.6MiB/s-31.5MiB/s (18.4MB/s-33.0MB/s), io=97.2MiB (102MB), run=1004-1008msec 00:10:57.110 00:10:57.110 Disk stats (read/write): 00:10:57.110 nvme0n1: ios=5385/5632, merge=0/0, ticks=27813/23276, in_queue=51089, util=85.17% 00:10:57.110 nvme0n2: 
ios=6461/6656, merge=0/0, ticks=42050/37942, in_queue=79992, util=87.36% 00:10:57.110 nvme0n3: ios=3640/3639, merge=0/0, ticks=23917/27551, in_queue=51468, util=91.98% 00:10:57.110 nvme0n4: ios=4339/4608, merge=0/0, ticks=28373/28570, in_queue=56943, util=95.62% 00:10:57.110 16:37:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:57.110 16:37:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2066188 00:10:57.110 16:37:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:57.110 16:37:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:57.110 [global] 00:10:57.110 thread=1 00:10:57.110 invalidate=1 00:10:57.110 rw=read 00:10:57.110 time_based=1 00:10:57.110 runtime=10 00:10:57.110 ioengine=libaio 00:10:57.110 direct=1 00:10:57.110 bs=4096 00:10:57.110 iodepth=1 00:10:57.110 norandommap=1 00:10:57.110 numjobs=1 00:10:57.110 00:10:57.110 [job0] 00:10:57.110 filename=/dev/nvme0n1 00:10:57.110 [job1] 00:10:57.110 filename=/dev/nvme0n2 00:10:57.110 [job2] 00:10:57.110 filename=/dev/nvme0n3 00:10:57.110 [job3] 00:10:57.110 filename=/dev/nvme0n4 00:10:57.110 Could not set queue depth (nvme0n1) 00:10:57.110 Could not set queue depth (nvme0n2) 00:10:57.110 Could not set queue depth (nvme0n3) 00:10:57.110 Could not set queue depth (nvme0n4) 00:10:57.369 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.369 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.369 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.369 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.369 fio-3.35 00:10:57.369 Starting 4 threads 00:10:59.906 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:00.165 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:00.165 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=11001856, buflen=4096 00:11:00.165 fio: pid=2066402, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.165 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.165 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:00.423 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=286720, buflen=4096 00:11:00.423 fio: pid=2066401, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.423 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=16850944, buflen=4096 00:11:00.423 fio: pid=2066399, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.423 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.423 16:37:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:00.682 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.683 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:00.683 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=315392, buflen=4096 00:11:00.683 fio: pid=2066400, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.683 00:11:00.683 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066399: Fri Dec 6 16:37:49 2024 00:11:00.683 read: IOPS=1385, BW=5541KiB/s (5674kB/s)(16.1MiB/2970msec) 00:11:00.683 slat (usec): min=3, max=7304, avg=18.60, stdev=160.66 00:11:00.683 clat (usec): min=156, max=4772, avg=696.27, stdev=151.63 00:11:00.683 lat (usec): min=164, max=8015, avg=714.87, stdev=222.52 00:11:00.683 clat percentiles (usec): 00:11:00.683 | 1.00th=[ 383], 5.00th=[ 482], 10.00th=[ 537], 20.00th=[ 603], 00:11:00.683 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 717], 00:11:00.683 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 865], 95.00th=[ 955], 00:11:00.683 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1352], 00:11:00.683 | 99.99th=[ 4752] 00:11:00.683 bw ( KiB/s): min= 5136, max= 5800, per=62.09%, avg=5491.20, stdev=261.54, samples=5 00:11:00.683 iops : min= 1284, max= 1450, avg=1372.80, stdev=65.39, samples=5 00:11:00.683 lat (usec) : 250=0.15%, 500=6.44%, 750=65.81%, 1000=24.69% 00:11:00.683 lat (msec) : 2=2.84%, 4=0.02%, 10=0.02% 00:11:00.683 cpu : usr=0.84%, sys=2.32%, ctx=4120, majf=0, minf=1 00:11:00.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.683 issued rwts: total=4115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.683 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066400: Fri Dec 6 16:37:49 2024 00:11:00.683 read: IOPS=24, BW=98.0KiB/s (100kB/s)(308KiB/3142msec) 00:11:00.683 slat (usec): min=11, max=1651, avg=51.46, stdev=186.60 00:11:00.683 clat (usec): min=874, max=41968, avg=40469.79, stdev=4574.37 00:11:00.683 lat (usec): min=911, max=42966, avg=40521.57, stdev=4582.03 00:11:00.683 clat percentiles (usec): 00:11:00.683 | 1.00th=[ 873], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:00.683 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.683 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:00.683 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.683 | 99.99th=[42206] 00:11:00.683 bw ( KiB/s): min= 96, max= 104, per=1.11%, avg=98.67, stdev= 4.13, samples=6 00:11:00.683 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:11:00.683 lat (usec) : 1000=1.28% 00:11:00.683 lat (msec) : 50=97.44% 00:11:00.683 cpu : usr=0.10%, sys=0.00%, ctx=81, majf=0, minf=2 00:11:00.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:11:00.683 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.683 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.683 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066401: Fri Dec 6 16:37:49 2024 00:11:00.683 read: IOPS=24, BW=97.9KiB/s (100kB/s)(280KiB/2859msec) 00:11:00.683 slat (usec): min=11, max=300, avg=29.86, stdev=32.70 00:11:00.683 clat (usec): min=853, max=41960, avg=40429.27, stdev=4801.58 00:11:00.683 lat (usec): min=888, max=41986, avg=40459.17, stdev=4801.60 00:11:00.683 clat percentiles (usec): 00:11:00.683 | 1.00th=[ 857], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:00.683 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:00.683 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:00.683 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:00.683 | 99.99th=[42206] 00:11:00.683 bw ( KiB/s): min= 96, max= 104, per=1.12%, avg=99.20, stdev= 4.38, samples=5 00:11:00.683 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:00.683 lat (usec) : 1000=1.41% 00:11:00.683 lat (msec) : 50=97.18% 00:11:00.683 cpu : usr=0.10%, sys=0.00%, ctx=72, majf=0, minf=2 00:11:00.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.683 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.683 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.683 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2066402: Fri Dec 6 16:37:49 2024 00:11:00.683 read: IOPS=1008, BW=4033KiB/s (4130kB/s)(10.5MiB/2664msec) 00:11:00.683 slat (nsec): min=3105, max=66115, avg=17668.26, stdev=7262.20 00:11:00.683 clat (usec): min=445, max=42049, avg=961.63, stdev=1440.12 00:11:00.683 lat (usec): min=449, max=42075, avg=979.30, stdev=1440.46 00:11:00.683 clat percentiles (usec): 00:11:00.683 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 832], 00:11:00.683 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[ 930], 60.00th=[ 947], 00:11:00.683 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1045], 00:11:00.683 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[41157], 99.95th=[41681], 00:11:00.683 | 99.99th=[42206] 00:11:00.683 bw ( KiB/s): min= 3112, max= 4632, per=46.00%, avg=4068.80, stdev=569.54, samples=5 00:11:00.683 iops : min= 778, max= 1158, avg=1017.20, stdev=142.38, samples=5 00:11:00.683 lat (usec) : 500=0.11%, 750=8.19%, 1000=76.11% 00:11:00.683 lat (msec) : 2=15.41%, 50=0.15% 00:11:00.683 cpu : usr=0.64%, sys=2.03%, ctx=2689, majf=0, minf=2 00:11:00.683 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.683 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.683 issued rwts: total=2687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.683 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.683 00:11:00.683 Run status group 0 (all jobs): 00:11:00.683 READ: bw=8844KiB/s (9056kB/s), 97.9KiB/s-5541KiB/s (100kB/s-5674kB/s), io=27.1MiB (28.5MB), run=2664-3142msec 00:11:00.683 
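The err=95 (file:io_u.c:1889, Operation not supported) results above are the expected outcome of this phase, not a failure of the test: a 10-second read job is launched in the background, then the bdevs backing the connected namespaces are deleted out from under it, and the test only passes if fio fails. Reduced to a sketch, using the same invocations that appear in the trace:

```bash
# Hotplug sketch: run fio in the background, yank the bdevs, expect failure.
./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3                       # let the jobs ramp up first, as the trace does

# Delete the RAID volumes and every malloc bdev backing the namespaces.
./scripts/rpc.py bdev_raid_delete concat0
./scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    ./scripts/rpc.py bdev_malloc_delete $m
done

# fio must exit non-zero: its reads now target removed namespaces.
fio_status=0
wait $fio_pid || fio_status=$?
[ $fio_status -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'
```

This is why each job below reports only a handful of short reads before erroring out, and why the script goes on to print the "fio failed as expected" message rather than aborting.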
00:11:00.683 Disk stats (read/write): 00:11:00.683 nvme0n1: ios=3958/0, merge=0/0, ticks=3413/0, in_queue=3413, util=98.66% 00:11:00.683 nvme0n2: ios=76/0, merge=0/0, ticks=3078/0, in_queue=3078, util=95.63% 00:11:00.683 nvme0n3: ios=70/0, merge=0/0, ticks=2833/0, in_queue=2833, util=96.37% 00:11:00.683 nvme0n4: ios=2629/0, merge=0/0, ticks=2453/0, in_queue=2453, util=96.42% 00:11:00.683 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.683 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:00.942 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.942 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:00.942 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:00.942 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:01.202 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.202 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:01.462 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:01.462 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2066188 00:11:01.462 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:01.462 16:37:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:01.462 nvmf hotplug test: fio failed as expected 00:11:01.462 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.722 rmmod nvme_tcp 00:11:01.722 rmmod nvme_fabrics 00:11:01.722 rmmod nvme_keyring 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2062381 ']' 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2062381 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2062381 ']' 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2062381 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:01.722 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2062381 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2062381' 00:11:01.723 killing process with pid 2062381 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2062381 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2062381 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:01.723 16:37:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.394 00:11:04.394 real 0m25.741s 00:11:04.394 user 2m16.747s 00:11:04.394 sys 0m6.955s 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.394 ************************************ 00:11:04.394 END TEST nvmf_fio_target 00:11:04.394 ************************************ 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.394 ************************************ 00:11:04.394 START TEST nvmf_bdevio 00:11:04.394 ************************************ 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:04.394 * Looking for test storage... 
00:11:04.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.394 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.395 --rc genhtml_branch_coverage=1 00:11:04.395 --rc genhtml_function_coverage=1 00:11:04.395 --rc genhtml_legend=1 00:11:04.395 --rc geninfo_all_blocks=1 00:11:04.395 --rc geninfo_unexecuted_blocks=1 00:11:04.395 00:11:04.395 ' 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.395 --rc genhtml_branch_coverage=1 00:11:04.395 --rc genhtml_function_coverage=1 00:11:04.395 --rc genhtml_legend=1 00:11:04.395 --rc geninfo_all_blocks=1 00:11:04.395 --rc geninfo_unexecuted_blocks=1 00:11:04.395 00:11:04.395 ' 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.395 --rc genhtml_branch_coverage=1 00:11:04.395 --rc genhtml_function_coverage=1 00:11:04.395 --rc genhtml_legend=1 00:11:04.395 --rc geninfo_all_blocks=1 00:11:04.395 --rc geninfo_unexecuted_blocks=1 00:11:04.395 00:11:04.395 ' 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.395 --rc genhtml_branch_coverage=1 00:11:04.395 --rc genhtml_function_coverage=1 00:11:04.395 --rc genhtml_legend=1 00:11:04.395 --rc geninfo_all_blocks=1 00:11:04.395 --rc geninfo_unexecuted_blocks=1 00:11:04.395 00:11:04.395 ' 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.395 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.396 16:37:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:09.674 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:09.674 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.674 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:09.675 16:37:57 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:09.675 Found net devices under 0000:31:00.0: cvl_0_0 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:09.675 Found net devices under 0000:31:00.1: cvl_0_1 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.675 
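From here nvmf_tcp_init is pure plumbing: the second port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced below:

# condensed from the nvmf_tcp_init trace that follows
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # verify reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1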
16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:09.675 16:37:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:09.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:11:09.675 00:11:09.675 --- 10.0.0.2 ping statistics --- 00:11:09.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.675 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:09.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:11:09.675 00:11:09.675 --- 10.0.0.1 ping statistics --- 00:11:09.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.675 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2071773 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2071773 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2071773 ']' 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:09.675 [2024-12-06 16:37:58.091209] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:11:09.675 [2024-12-06 16:37:58.091262] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.675 [2024-12-06 16:37:58.161158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.675 [2024-12-06 16:37:58.177053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.675 [2024-12-06 16:37:58.177081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.675 [2024-12-06 16:37:58.177087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.675 [2024-12-06 16:37:58.177091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.675 [2024-12-06 16:37:58.177095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.675 [2024-12-06 16:37:58.178316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:09.675 [2024-12-06 16:37:58.178469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:09.675 [2024-12-06 16:37:58.178580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.675 [2024-12-06 16:37:58.178582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.675 [2024-12-06 16:37:58.275294] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.675 Malloc0 00:11:09.675 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.676 16:37:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.676 [2024-12-06 16:37:58.323380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:09.676 { 00:11:09.676 "params": { 00:11:09.676 "name": "Nvme$subsystem", 00:11:09.676 "trtype": "$TEST_TRANSPORT", 00:11:09.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.676 "adrfam": "ipv4", 00:11:09.676 "trsvcid": "$NVMF_PORT", 00:11:09.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.676 "hdgst": ${hdgst:-false}, 00:11:09.676 "ddgst": ${ddgst:-false} 00:11:09.676 }, 00:11:09.676 "method": "bdev_nvme_attach_controller" 00:11:09.676 } 00:11:09.676 EOF 00:11:09.676 )") 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:09.676 16:37:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:09.676 "params": { 00:11:09.676 "name": "Nvme1", 00:11:09.676 "trtype": "tcp", 00:11:09.676 "traddr": "10.0.0.2", 00:11:09.676 "adrfam": "ipv4", 00:11:09.676 "trsvcid": "4420", 00:11:09.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.676 "hdgst": false, 00:11:09.676 "ddgst": false 00:11:09.676 }, 00:11:09.676 "method": "bdev_nvme_attach_controller" 00:11:09.676 }' 00:11:09.676 [2024-12-06 16:37:58.358966] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:11:09.676 [2024-12-06 16:37:58.359013] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071803 ] 00:11:09.934 [2024-12-06 16:37:58.436697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.934 [2024-12-06 16:37:58.457300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.934 [2024-12-06 16:37:58.457457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.934 [2024-12-06 16:37:58.457457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.193 I/O targets: 00:11:10.193 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:10.193 00:11:10.193 00:11:10.193 CUnit - A unit testing framework for C - Version 2.1-3 00:11:10.193 http://cunit.sourceforge.net/ 00:11:10.193 00:11:10.193 00:11:10.193 Suite: bdevio tests on: Nvme1n1 00:11:10.193 Test: blockdev write read block ...passed 00:11:10.193 Test: blockdev write zeroes read block ...passed 00:11:10.193 Test: blockdev write zeroes read no split ...passed 00:11:10.193 Test: blockdev write zeroes read split ...passed 00:11:10.193 Test: blockdev write zeroes read split partial ...passed 00:11:10.193 Test: blockdev reset ...[2024-12-06 16:37:58.869269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:10.193 [2024-12-06 16:37:58.869334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10e3230 (9): Bad file descriptor 00:11:10.193 [2024-12-06 16:37:58.885345] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:10.460 passed 00:11:10.460 Test: blockdev write read 8 blocks ...passed 00:11:10.460 Test: blockdev write read size > 128k ...passed 00:11:10.460 Test: blockdev write read invalid size ...passed 00:11:10.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.460 Test: blockdev write read max offset ...passed 00:11:10.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.460 Test: blockdev writev readv 8 blocks ...passed 00:11:10.460 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.460 Test: blockdev writev readv block ...passed 00:11:10.460 Test: blockdev writev readv size > 128k ...passed 00:11:10.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.460 Test: blockdev comparev and writev ...[2024-12-06 16:37:59.108651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.460 [2024-12-06 16:37:59.108681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:10.460 [2024-12-06 16:37:59.108693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.460 [2024-12-06 16:37:59.108699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:10.460 [2024-12-06 16:37:59.109170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.460 [2024-12-06 16:37:59.109180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:10.460 [2024-12-06 16:37:59.109190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.460 [2024-12-06 16:37:59.109195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:10.460 [2024-12-06 16:37:59.109618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.460 [2024-12-06 16:37:59.109627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:10.460 [2024-12-06 16:37:59.109636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.460 [2024-12-06 16:37:59.109642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:10.460 [2024-12-06 16:37:59.110069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.460 [2024-12-06 16:37:59.110078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:10.460 [2024-12-06 16:37:59.110088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:10.461 [2024-12-06 16:37:59.110093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:10.461 passed 00:11:10.719 Test: blockdev nvme passthru rw ...passed 00:11:10.719 Test: blockdev nvme passthru vendor specific ...[2024-12-06 16:37:59.193857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.719 [2024-12-06 16:37:59.193869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:10.719 [2024-12-06 16:37:59.194186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.719 [2024-12-06 16:37:59.194195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:10.719 [2024-12-06 16:37:59.194505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.719 [2024-12-06 16:37:59.194513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:10.719 [2024-12-06 16:37:59.194834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:10.719 [2024-12-06 16:37:59.194843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:10.719 passed 00:11:10.719 Test: blockdev nvme admin passthru ...passed 00:11:10.719 Test: blockdev copy ...passed 00:11:10.719 00:11:10.719 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.719 suites 1 1 n/a 0 0 00:11:10.719 tests 23 23 23 0 0 00:11:10.719 asserts 152 152 152 0 n/a 00:11:10.719 00:11:10.719 Elapsed time = 0.996 seconds 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.719 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.719 rmmod nvme_tcp 00:11:10.719 rmmod nvme_fabrics 00:11:10.719 rmmod nvme_keyring 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
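Stripped of xtrace noise, the bdevio pass above boils down to a short RPC lifecycle: one transport, one RAM-backed bdev, one subsystem exporting it over TCP, and a single delete on the way out. Collected from the trace (the full rpc.py path is shortened here for readability):

# collected from the bdevio.sh trace above; "rpc.py" stands in for
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
rpc.py nvmf_create_transport -t tcp -o -u 8192       # transport options exactly as logged
rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... bdevio runs its 23 tests against the exported namespace ...
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1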
00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2071773 ']' 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2071773 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2071773 ']' 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2071773 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2071773 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2071773' 00:11:10.977 killing process with pid 2071773 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2071773 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2071773 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.977 16:37:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:13.513 00:11:13.513 real 0m9.151s 00:11:13.513 user 0m9.163s 00:11:13.513 sys 0m4.451s 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:13.513 ************************************ 00:11:13.513 END TEST nvmf_bdevio 00:11:13.513 ************************************ 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:13.513 00:11:13.513 real 4m23.715s 00:11:13.513 user 10m50.640s 00:11:13.513 sys 1m27.050s 
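For reference, the controller that bdevio attached and reset was described by the JSON that gen_nvmf_target_json emitted onto /dev/fd/62 earlier in the trace; pretty-printed, it is a single bdev_nvme_attach_controller call:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}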
00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:13.513 ************************************ 00:11:13.513 END TEST nvmf_target_core 00:11:13.513 ************************************ 00:11:13.513 16:38:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:13.513 16:38:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.513 16:38:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.513 16:38:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:13.513 ************************************ 00:11:13.513 START TEST nvmf_target_extra 00:11:13.513 ************************************ 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:13.513 * Looking for test storage... 00:11:13.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.513 --rc genhtml_branch_coverage=1 00:11:13.513 --rc genhtml_function_coverage=1 00:11:13.513 --rc genhtml_legend=1 00:11:13.513 --rc geninfo_all_blocks=1 00:11:13.513 --rc geninfo_unexecuted_blocks=1 00:11:13.513 00:11:13.513 ' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.513 --rc genhtml_branch_coverage=1 00:11:13.513 --rc genhtml_function_coverage=1 00:11:13.513 --rc genhtml_legend=1 00:11:13.513 --rc geninfo_all_blocks=1 00:11:13.513 --rc geninfo_unexecuted_blocks=1 00:11:13.513 00:11:13.513 ' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.513 --rc genhtml_branch_coverage=1 00:11:13.513 --rc genhtml_function_coverage=1 00:11:13.513 --rc genhtml_legend=1 00:11:13.513 --rc geninfo_all_blocks=1 00:11:13.513 --rc geninfo_unexecuted_blocks=1 00:11:13.513 00:11:13.513 ' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.513 --rc genhtml_branch_coverage=1 00:11:13.513 --rc genhtml_function_coverage=1 00:11:13.513 --rc genhtml_legend=1 00:11:13.513 --rc geninfo_all_blocks=1 00:11:13.513 --rc geninfo_unexecuted_blocks=1 00:11:13.513 00:11:13.513 ' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
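The lcov gate traced above (lt 1.15 2 -> cmp_versions) splits each version string on ., - and : with IFS, pads missing components, and compares fields numerically left to right. A compact sketch of that idiom in plain bash; the function name version_lt is illustrative, not scripts/common.sh itself:

version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"              # "1.15" becomes (1 15)
    IFS=.-: read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for ((i = 0; i < n; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}           # pad missing fields with 0
        (( 10#$a > 10#$b )) && return 1       # force base 10; left side is newer
        (( 10#$a < 10#$b )) && return 0       # left side is older
    done
    return 1                                   # equal versions: not less-than
}
# version_lt 1.15 2 && echo "lcov older than 2: use the legacy LCOV_OPTS above"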
00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.513 16:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 ************************************ 00:11:13.514 START TEST nvmf_example 00:11:13.514 ************************************ 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:13.514 * Looking for test storage... 
00:11:13.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.514 16:38:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:13.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.514 --rc genhtml_branch_coverage=1 00:11:13.514 --rc genhtml_function_coverage=1 00:11:13.514 --rc genhtml_legend=1 00:11:13.514 --rc geninfo_all_blocks=1 00:11:13.514 --rc geninfo_unexecuted_blocks=1 00:11:13.514 00:11:13.514 ' 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:13.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.514 --rc genhtml_branch_coverage=1 00:11:13.514 --rc genhtml_function_coverage=1 00:11:13.514 --rc genhtml_legend=1 00:11:13.514 --rc geninfo_all_blocks=1 00:11:13.514 --rc geninfo_unexecuted_blocks=1 00:11:13.514 00:11:13.514 ' 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:13.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.514 --rc genhtml_branch_coverage=1 00:11:13.514 --rc genhtml_function_coverage=1 00:11:13.514 --rc genhtml_legend=1 00:11:13.514 --rc geninfo_all_blocks=1 00:11:13.514 --rc geninfo_unexecuted_blocks=1 00:11:13.514 00:11:13.514 ' 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:13.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.514 --rc genhtml_branch_coverage=1 00:11:13.514 --rc genhtml_function_coverage=1 00:11:13.514 --rc genhtml_legend=1 00:11:13.514 --rc geninfo_all_blocks=1 00:11:13.514 --rc geninfo_unexecuted_blocks=1 00:11:13.514 00:11:13.514 ' 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:13.514 16:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.514 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:13.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:13.515 16:38:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:13.515 16:38:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:18.853 16:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:18.853 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:18.853 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:18.853 Found net devices under 0000:31:00.0: cvl_0_0 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:18.853 Found net devices under 0000:31:00.1: cvl_0_1 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.853 16:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:18.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:11:18.853 00:11:18.853 --- 10.0.0.2 ping statistics --- 00:11:18.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.853 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:11:18.853 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:11:18.853 00:11:18.853 --- 10.0.0.1 ping statistics --- 00:11:18.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.853 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2076557 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2076557 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2076557 ']' 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.854 16:38:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.854 16:38:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.795 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:19.796 16:38:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:32.016 Initializing NVMe Controllers 00:11:32.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:32.016 Initialization complete. Launching workers. 00:11:32.016 ======================================================== 00:11:32.016 Latency(us) 00:11:32.016 Device Information : IOPS MiB/s Average min max 00:11:32.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19862.29 77.59 3221.84 620.77 15513.94 00:11:32.016 ======================================================== 00:11:32.016 Total : 19862.29 77.59 3221.84 620.77 15513.94 00:11:32.017 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.017 rmmod nvme_tcp 00:11:32.017 rmmod nvme_fabrics 00:11:32.017 rmmod nvme_keyring 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2076557 ']' 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2076557 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2076557 ']' 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2076557 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2076557 00:11:32.017 16:38:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2076557' 00:11:32.017 killing process with pid 2076557 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2076557 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2076557 00:11:32.017 nvmf threads initialize successfully 00:11:32.017 bdev subsystem init successfully 00:11:32.017 created a nvmf target service 00:11:32.017 create targets's poll groups done 00:11:32.017 all subsystems of target started 00:11:32.017 nvmf target is running 00:11:32.017 all subsystems of target stopped 00:11:32.017 destroy targets's poll groups done 00:11:32.017 destroyed the nvmf target service 00:11:32.017 bdev subsystem finish successfully 00:11:32.017 nvmf threads destroy successfully 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.017 16:38:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.591 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.591 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:32.591 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.591 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.591 00:11:32.591 real 0m19.124s 00:11:32.591 user 0m45.909s 00:11:32.591 sys 0m5.261s 00:11:32.591 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.591 16:38:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:32.591 ************************************ 00:11:32.591 END TEST nvmf_example 00:11:32.591 ************************************ 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.591 ************************************ 00:11:32.591 START TEST nvmf_filesystem 00:11:32.591 ************************************ 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:32.591 * Looking for test storage... 00:11:32.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.591 --rc genhtml_branch_coverage=1 00:11:32.591 --rc genhtml_function_coverage=1 00:11:32.591 --rc genhtml_legend=1 00:11:32.591 --rc geninfo_all_blocks=1 00:11:32.591 --rc geninfo_unexecuted_blocks=1 00:11:32.591 00:11:32.591 ' 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.591 --rc genhtml_branch_coverage=1 00:11:32.591 --rc genhtml_function_coverage=1 00:11:32.591 --rc genhtml_legend=1 00:11:32.591 --rc geninfo_all_blocks=1 00:11:32.591 --rc geninfo_unexecuted_blocks=1 00:11:32.591 00:11:32.591 ' 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.591 --rc genhtml_branch_coverage=1 00:11:32.591 --rc genhtml_function_coverage=1 00:11:32.591 --rc genhtml_legend=1 00:11:32.591 --rc geninfo_all_blocks=1 00:11:32.591 --rc geninfo_unexecuted_blocks=1 00:11:32.591 00:11:32.591 ' 00:11:32.591 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.592 --rc genhtml_branch_coverage=1 00:11:32.592 --rc genhtml_function_coverage=1 00:11:32.592 --rc genhtml_legend=1 00:11:32.592 --rc geninfo_all_blocks=1 00:11:32.592 --rc geninfo_unexecuted_blocks=1 00:11:32.592 00:11:32.592 ' 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:32.592 16:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:32.592 
16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:32.592 16:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:32.592 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
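[editor note] The applications.sh entries just above show how the harness locates its binaries: it derives the repository root from the sourced script's own path, then defines each launcher as a bash array so callers can append extra argv. A minimal standalone sketch of that idiom (the paths and the final echo are illustrative assumptions, not the SPDK source):

#!/usr/bin/env bash
# Resolve this script's absolute directory, then walk up to the repo root.
_here=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")
_root=$(readlink -f "$_here/../..")        # e.g. <repo>/test/common -> <repo>

_app_dir=$_root/build/bin                  # where the built targets land
NVMF_APP=("$_app_dir/nvmf_tgt")            # arrays let callers append argv:
SPDK_APP=("$_app_dir/spdk_tgt")            #   "${NVMF_APP[@]}" -m 0x3 ...

echo "nvmf target launcher: ${NVMF_APP[*]}"
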
00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:32.593 #define SPDK_CONFIG_H 00:11:32.593 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:32.593 #define SPDK_CONFIG_APPS 1 00:11:32.593 #define SPDK_CONFIG_ARCH native 00:11:32.593 #undef SPDK_CONFIG_ASAN 00:11:32.593 #undef SPDK_CONFIG_AVAHI 00:11:32.593 #undef SPDK_CONFIG_CET 00:11:32.593 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:32.593 #define SPDK_CONFIG_COVERAGE 1 00:11:32.593 #define SPDK_CONFIG_CROSS_PREFIX 00:11:32.593 #undef SPDK_CONFIG_CRYPTO 00:11:32.593 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:32.593 #undef SPDK_CONFIG_CUSTOMOCF 00:11:32.593 #undef SPDK_CONFIG_DAOS 00:11:32.593 #define SPDK_CONFIG_DAOS_DIR 00:11:32.593 #define SPDK_CONFIG_DEBUG 1 00:11:32.593 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:32.593 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:32.593 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:32.593 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:32.593 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:32.593 #undef SPDK_CONFIG_DPDK_UADK 00:11:32.593 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:32.593 #define SPDK_CONFIG_EXAMPLES 1 00:11:32.593 #undef SPDK_CONFIG_FC 00:11:32.593 #define SPDK_CONFIG_FC_PATH 00:11:32.593 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:32.593 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:32.593 #define SPDK_CONFIG_FSDEV 1 00:11:32.593 #undef SPDK_CONFIG_FUSE 00:11:32.593 #undef SPDK_CONFIG_FUZZER 00:11:32.593 #define SPDK_CONFIG_FUZZER_LIB 00:11:32.593 #undef SPDK_CONFIG_GOLANG 00:11:32.593 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:32.593 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:32.593 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:32.593 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:32.593 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:32.593 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:32.593 #undef SPDK_CONFIG_HAVE_LZ4 00:11:32.593 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:32.593 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:32.593 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:32.593 #define SPDK_CONFIG_IDXD 1 00:11:32.593 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:32.593 #undef SPDK_CONFIG_IPSEC_MB 00:11:32.593 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:32.593 #define SPDK_CONFIG_ISAL 1 00:11:32.593 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:32.593 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:32.593 #define SPDK_CONFIG_LIBDIR 00:11:32.593 #undef SPDK_CONFIG_LTO 00:11:32.593 #define SPDK_CONFIG_MAX_LCORES 128 00:11:32.593 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:32.593 #define SPDK_CONFIG_NVME_CUSE 1 00:11:32.593 #undef SPDK_CONFIG_OCF 00:11:32.593 #define SPDK_CONFIG_OCF_PATH 00:11:32.593 #define SPDK_CONFIG_OPENSSL_PATH 00:11:32.593 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:32.593 #define SPDK_CONFIG_PGO_DIR 00:11:32.593 #undef SPDK_CONFIG_PGO_USE 00:11:32.593 #define SPDK_CONFIG_PREFIX /usr/local 00:11:32.593 #undef SPDK_CONFIG_RAID5F 00:11:32.593 #undef SPDK_CONFIG_RBD 00:11:32.593 #define SPDK_CONFIG_RDMA 1 00:11:32.593 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:32.593 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:32.593 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:32.593 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:32.593 #define SPDK_CONFIG_SHARED 1 00:11:32.593 #undef SPDK_CONFIG_SMA 00:11:32.593 #define SPDK_CONFIG_TESTS 1 00:11:32.593 #undef SPDK_CONFIG_TSAN 00:11:32.593 #define SPDK_CONFIG_UBLK 1 00:11:32.593 #define SPDK_CONFIG_UBSAN 1 00:11:32.593 #undef SPDK_CONFIG_UNIT_TESTS 00:11:32.593 #undef SPDK_CONFIG_URING 00:11:32.593 #define SPDK_CONFIG_URING_PATH 00:11:32.593 #undef SPDK_CONFIG_URING_ZNS 00:11:32.593 #undef SPDK_CONFIG_USDT 00:11:32.593 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:32.593 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:32.593 #define SPDK_CONFIG_VFIO_USER 1 00:11:32.593 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:32.593 #define SPDK_CONFIG_VHOST 1 00:11:32.593 #define SPDK_CONFIG_VIRTIO 1 00:11:32.593 #undef SPDK_CONFIG_VTUNE 00:11:32.593 #define SPDK_CONFIG_VTUNE_DIR 00:11:32.593 #define SPDK_CONFIG_WERROR 1 00:11:32.593 #define SPDK_CONFIG_WPDK_DIR 00:11:32.593 #undef SPDK_CONFIG_XNVME 00:11:32.593 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:32.593 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:32.594 16:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
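[editor note] Each ': <value>' line paired with an 'export SPDK_TEST_*' line in this stretch is the shell's default-assignment idiom: the ':' no-op builtin forces evaluation of ${VAR=default}, which assigns only when the variable is unset, and the export publishes the result to every child test. A short sketch of the same idiom (flag names mirror the log; the defaults here are illustrative):

#!/usr/bin/env bash
# ':' evaluates its arguments and discards them; ${VAR=default} assigns
# default only if VAR is unset, so a CI job can pre-seed any flag in the
# environment before this file is sourced.
: "${SPDK_TEST_NVMF=1}";     export SPDK_TEST_NVMF
: "${SPDK_TEST_NVME_CLI=1}"; export SPDK_TEST_NVME_CLI
: "${SPDK_TEST_VFIOUSER=1}"; export SPDK_TEST_VFIOUSER

echo "SPDK_TEST_NVMF=$SPDK_TEST_NVMF"      # 1 unless the caller overrode it
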
00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:32.594 16:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
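[editor note] The exports just above gate optional behavior: SPDK_RUN_UBSAN=1 enables the sanitizer runtime options seen later in this log, and SPDK_RUN_EXTERNAL_DPDK points at the prebuilt DPDK tree whose lib directory shows up in LD_LIBRARY_PATH below. A hypothetical consumer of those two gates might look like this (a sketch, not SPDK's code; the UBSAN option string copies the value visible later in this log):

#!/usr/bin/env bash
# Branch on the 0/1 sanitizer gate before configuring its runtime.
if [[ "${SPDK_RUN_UBSAN:-0}" -eq 1 ]]; then
    export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
fi
# When an external DPDK build is supplied, its shared libraries must be
# resolvable at run time alongside the SPDK ones.
if [[ -n "${SPDK_RUN_EXTERNAL_DPDK:-}" ]]; then
    export LD_LIBRARY_PATH="$SPDK_RUN_EXTERNAL_DPDK/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
fi
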
00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:32.594 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:32.595 
16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:32.595 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:32.596 16:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2079666 ]] 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2079666 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Epy2Fh 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:32.596 16:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Epy2Fh/tests/target /tmp/spdk.Epy2Fh 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=121308577792 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356533760 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8047955968 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=64668233728 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847713792 00:11:32.596 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23592960 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=349184 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=154624 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677855232 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678268928 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=413696 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 
00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:32.597 * Looking for test storage... 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=121308577792 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10262548480 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 
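(Worth noting from the selection above: after finding a candidate whose mount has target_space >= requested_size, the helper also projects what usage would look like with the reservation added, and skips the candidate if that would push the filesystem past 95%. A sketch inferred from the numbers in the trace — 2214592512 + 8047955968 = 10262548480 — with the loop structure assumed:)

    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails[$mount]}
        (( target_space >= requested_size )) || continue
        new_size=$(( requested_size + ${uses[$mount]} ))       # projected usage with the reservation
        (( new_size * 100 / ${sizes[$mount]} > 95 )) && continue  # too full, try the next candidate
        export SPDK_TEST_STORAGE=$target_dir
        break
    done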
00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:32.597 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:32.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.859 --rc genhtml_branch_coverage=1 00:11:32.859 --rc genhtml_function_coverage=1 00:11:32.859 --rc genhtml_legend=1 00:11:32.859 --rc geninfo_all_blocks=1 00:11:32.859 --rc geninfo_unexecuted_blocks=1 00:11:32.859 00:11:32.859 ' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:32.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.859 --rc genhtml_branch_coverage=1 00:11:32.859 --rc genhtml_function_coverage=1 00:11:32.859 --rc genhtml_legend=1 00:11:32.859 --rc geninfo_all_blocks=1 00:11:32.859 --rc geninfo_unexecuted_blocks=1 00:11:32.859 00:11:32.859 ' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:32.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.859 --rc genhtml_branch_coverage=1 00:11:32.859 --rc genhtml_function_coverage=1 00:11:32.859 --rc genhtml_legend=1 00:11:32.859 --rc geninfo_all_blocks=1 00:11:32.859 --rc geninfo_unexecuted_blocks=1 00:11:32.859 00:11:32.859 ' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:32.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.859 --rc genhtml_branch_coverage=1 00:11:32.859 --rc genhtml_function_coverage=1 00:11:32.859 --rc genhtml_legend=1 00:11:32.859 --rc geninfo_all_blocks=1 00:11:32.859 --rc geninfo_unexecuted_blocks=1 00:11:32.859 00:11:32.859 ' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.859 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.860 16:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.860 16:38:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:38.137 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:38.137 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.137 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:38.137 Found net devices under 0000:31:00.0: cvl_0_0 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:38.137 Found net devices under 0000:31:00.1: cvl_0_1 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.137 16:38:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.137 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:11:38.138 00:11:38.138 --- 10.0.0.2 ping statistics --- 00:11:38.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.138 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:11:38.138 00:11:38.138 --- 10.0.0.1 ping statistics --- 00:11:38.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.138 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.138 ************************************ 00:11:38.138 START TEST nvmf_filesystem_no_in_capsule 00:11:38.138 ************************************ 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2083333 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2083333 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2083333 ']' 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.138 16:38:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.398 [2024-12-06 16:38:26.849668] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:11:38.398 [2024-12-06 16:38:26.849719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.398 [2024-12-06 16:38:26.936393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.398 [2024-12-06 16:38:26.959089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.398 [2024-12-06 16:38:26.959143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.398 [2024-12-06 16:38:26.959152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.398 [2024-12-06 16:38:26.959159] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.398 [2024-12-06 16:38:26.959165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
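(The nvmfappstart step above boils down to launching the target application inside the cvl_0_0_ns_spdk namespace set up earlier, so its TCP listener binds to the namespaced interface, then polling its RPC socket; the reactor start-up messages for the four cores selected by -m 0xF follow below. A condensed sketch, with the helper internals assumed:)

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid" /var/tmp/spdk.sock   # poll until the app accepts RPCs on the socket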
00:11:38.398 [2024-12-06 16:38:26.960889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.398 [2024-12-06 16:38:26.961047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.398 [2024-12-06 16:38:26.961202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.398 [2024-12-06 16:38:26.961202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.969 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.969 [2024-12-06 16:38:27.658664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.229 Malloc1 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.229 16:38:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.229 [2024-12-06 16:38:27.786672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:39.229 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:39.230 { 00:11:39.230 "name": "Malloc1", 00:11:39.230 "aliases": [ 00:11:39.230 "54fa9fdf-6c1f-4494-85cf-94cf43a38250" 00:11:39.230 ], 00:11:39.230 "product_name": "Malloc disk", 00:11:39.230 "block_size": 512, 00:11:39.230 "num_blocks": 1048576, 00:11:39.230 "uuid": "54fa9fdf-6c1f-4494-85cf-94cf43a38250", 00:11:39.230 "assigned_rate_limits": { 00:11:39.230 "rw_ios_per_sec": 0, 00:11:39.230 "rw_mbytes_per_sec": 0, 00:11:39.230 "r_mbytes_per_sec": 0, 00:11:39.230 "w_mbytes_per_sec": 0 00:11:39.230 }, 00:11:39.230 "claimed": true, 00:11:39.230 "claim_type": "exclusive_write", 00:11:39.230 "zoned": false, 00:11:39.230 "supported_io_types": { 00:11:39.230 "read": 
true, 00:11:39.230 "write": true, 00:11:39.230 "unmap": true, 00:11:39.230 "flush": true, 00:11:39.230 "reset": true, 00:11:39.230 "nvme_admin": false, 00:11:39.230 "nvme_io": false, 00:11:39.230 "nvme_io_md": false, 00:11:39.230 "write_zeroes": true, 00:11:39.230 "zcopy": true, 00:11:39.230 "get_zone_info": false, 00:11:39.230 "zone_management": false, 00:11:39.230 "zone_append": false, 00:11:39.230 "compare": false, 00:11:39.230 "compare_and_write": false, 00:11:39.230 "abort": true, 00:11:39.230 "seek_hole": false, 00:11:39.230 "seek_data": false, 00:11:39.230 "copy": true, 00:11:39.230 "nvme_iov_md": false 00:11:39.230 }, 00:11:39.230 "memory_domains": [ 00:11:39.230 { 00:11:39.230 "dma_device_id": "system", 00:11:39.230 "dma_device_type": 1 00:11:39.230 }, 00:11:39.230 { 00:11:39.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.230 "dma_device_type": 2 00:11:39.230 } 00:11:39.230 ], 00:11:39.230 "driver_specific": {} 00:11:39.230 } 00:11:39.230 ]' 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:39.230 16:38:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.139 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:41.139 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:41.139 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.139 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:41.139 16:38:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:43.049 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:43.310 16:38:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.691 ************************************ 00:11:44.691 START TEST filesystem_ext4 00:11:44.691 ************************************ 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
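(Each filesystem_* sub-test runs the same cycle against the exported namespace, as the trace entries that follow show: build the filesystem on the partition created above with parted, mount it, do a small create/sync/remove round-trip, and unmount while the target keeps serving I/O. In outline:)

    mkfs.ext4 -F /dev/nvme0n1p1        # -F forced, as make_filesystem does above
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device

(The btrfs variant that starts further down swaps in the matching mkfs tool; the rest of the cycle is identical.)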
00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:44.691 16:38:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:44.691 mke2fs 1.47.0 (5-Feb-2023) 00:11:44.691 Discarding device blocks: 0/522240 done 00:11:44.691 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:44.691 Filesystem UUID: b2574c8d-f6ed-4a29-9e63-25180ad7a913 00:11:44.691 Superblock backups stored on blocks: 00:11:44.691 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:44.691 00:11:44.691 Allocating group tables: 0/64 done 00:11:44.691 Writing inode tables: 0/64 done 00:11:44.691 Creating journal (8192 blocks): done 00:11:46.894 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:11:46.894 00:11:46.894 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:46.894 16:38:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.498 16:38:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.498 
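[Annotation] The make_filesystem helper traced at common/autotest_common.sh@930-941 picks the force flag by filesystem type: mkfs.ext4 spells it -F, while btrfs and xfs take -f. A hedged reconstruction from the xtrace (the retry counter i is declared in the trace but its retry path is never exercised in this run):

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        # ext4's mkfs forces with -F; the other mkfs tools use -f.
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs.$fstype $force "$dev_name" && return 0
    }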
16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2083333 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.498 00:11:53.498 real 0m8.085s 00:11:53.498 user 0m0.018s 00:11:53.498 sys 0m0.058s 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:53.498 ************************************ 00:11:53.498 END TEST filesystem_ext4 00:11:53.498 ************************************ 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.498 ************************************ 00:11:53.498 START TEST filesystem_btrfs 00:11:53.498 ************************************ 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:53.498 16:38:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:53.498 btrfs-progs v6.8.1 00:11:53.498 See https://btrfs.readthedocs.io for more information. 00:11:53.498 00:11:53.498 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:53.498 NOTE: several default settings have changed in version 5.15, please make sure 00:11:53.498 this does not affect your deployments: 00:11:53.498 - DUP for metadata (-m dup) 00:11:53.498 - enabled no-holes (-O no-holes) 00:11:53.498 - enabled free-space-tree (-R free-space-tree) 00:11:53.498 00:11:53.498 Label: (null) 00:11:53.498 UUID: e0779fe5-cc22-4fde-a483-de2360cfe352 00:11:53.498 Node size: 16384 00:11:53.498 Sector size: 4096 (CPU page size: 4096) 00:11:53.498 Filesystem size: 510.00MiB 00:11:53.498 Block group profiles: 00:11:53.498 Data: single 8.00MiB 00:11:53.498 Metadata: DUP 32.00MiB 00:11:53.498 System: DUP 8.00MiB 00:11:53.498 SSD detected: yes 00:11:53.498 Zoned device: no 00:11:53.498 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:53.498 Checksum: crc32c 00:11:53.498 Number of devices: 1 00:11:53.498 Devices: 00:11:53.498 ID SIZE PATH 00:11:53.498 1 510.00MiB /dev/nvme0n1p1 00:11:53.498 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2083333 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.498 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.499 
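[Annotation] After each mkfs, the test runs the same smoke cycle, visible in the trace for ext4, btrfs, and xfs alike: mount, create and delete a file with syncs in between, unmount, confirm the target process is still alive, and confirm the kernel still sees both the namespace and the partition. In script form, with the command lines lifted from target/filesystem.sh@23-43 as traced:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                       # 2083333 here: target must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible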
16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.499 00:11:53.499 real 0m0.846s 00:11:53.499 user 0m0.024s 00:11:53.499 sys 0m0.083s 00:11:53.499 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.499 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.499 ************************************ 00:11:53.499 END TEST filesystem_btrfs 00:11:53.499 ************************************ 00:11:53.499 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:53.499 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:53.499 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.499 16:38:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.499 ************************************ 00:11:53.499 START TEST filesystem_xfs 00:11:53.499 ************************************ 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:53.499 16:38:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:53.499 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:53.499 = sectsz=512 attr=2, projid32bit=1 00:11:53.499 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:53.499 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:53.499 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:53.499 = sunit=0 swidth=0 blks 00:11:53.499 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:53.499 log =internal log bsize=4096 blocks=16384, version=2 00:11:53.499 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:53.499 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:54.437 Discarding blocks...Done. 00:11:54.437 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:54.437 16:38:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2083333 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.346 00:11:56.346 real 0m2.986s 00:11:56.346 user 0m0.014s 00:11:56.346 sys 0m0.057s 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.346 16:38:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.346 ************************************ 00:11:56.346 END TEST filesystem_xfs 00:11:56.346 ************************************ 00:11:56.346 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:56.605 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:56.605 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.864 16:38:45 
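[Annotation] With all three filesystems verified, the no-in-capsule pass tears down the initiator side exactly as the last lines above show: remove the test partition under an flock on the whole device, flush, and drop the NVMe-oF connection.

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # serialize against udev/partprobe
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1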
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2083333 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2083333 ']' 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2083333 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2083333 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2083333' 00:11:56.864 killing process with pid 2083333 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2083333 00:11:56.864 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2083333 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:57.123 00:11:57.123 real 0m18.816s 00:11:57.123 user 1m14.372s 00:11:57.123 sys 0m1.155s 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 ************************************ 00:11:57.123 END TEST nvmf_filesystem_no_in_capsule 00:11:57.123 ************************************ 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 ************************************ 00:11:57.123 START TEST nvmf_filesystem_in_capsule 00:11:57.123 ************************************ 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2087718 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2087718 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2087718 ']' 00:11:57.123 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.124 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.124 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
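[Annotation] killprocess, traced at common/autotest_common.sh@954-978, is defensive: it checks that the pid still belongs to the SPDK reactor before signalling, then reaps it. A sketch consistent with the trace (the reactor_0 comm value is what this run reports):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        ps --no-headers -o comm= "$pid"    # reactor_0 in this run; refuse to kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap; the target is a child of this shell
    }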
00:11:57.124 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.124 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.124 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.124 [2024-12-06 16:38:45.713904] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:11:57.124 [2024-12-06 16:38:45.713956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.124 [2024-12-06 16:38:45.785190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.124 [2024-12-06 16:38:45.802720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.124 [2024-12-06 16:38:45.802754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.124 [2024-12-06 16:38:45.802760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.124 [2024-12-06 16:38:45.802765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.124 [2024-12-06 16:38:45.802769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.124 [2024-12-06 16:38:45.804223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.124 [2024-12-06 16:38:45.804370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.124 [2024-12-06 16:38:45.804528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.124 [2024-12-06 16:38:45.804530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.383 16:38:45 
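[Annotation] What distinguishes this second pass is the transport option: nvmf_create_transport is called with -c 4096, so small writes can carry their data inside the command capsule rather than being fetched in a separate data transfer. The RPC sequence the trace walks through, with rpc.py shown in place of the rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # 4 KiB in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420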
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.383 [2024-12-06 16:38:45.904609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.383 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.384 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:57.384 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.384 16:38:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.384 Malloc1 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.384 [2024-12-06 16:38:46.019414] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1384 -- # local bs 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:57.384 { 00:11:57.384 "name": "Malloc1", 00:11:57.384 "aliases": [ 00:11:57.384 "286d126a-00dd-4c25-8feb-595bf83043b4" 00:11:57.384 ], 00:11:57.384 "product_name": "Malloc disk", 00:11:57.384 "block_size": 512, 00:11:57.384 "num_blocks": 1048576, 00:11:57.384 "uuid": "286d126a-00dd-4c25-8feb-595bf83043b4", 00:11:57.384 "assigned_rate_limits": { 00:11:57.384 "rw_ios_per_sec": 0, 00:11:57.384 "rw_mbytes_per_sec": 0, 00:11:57.384 "r_mbytes_per_sec": 0, 00:11:57.384 "w_mbytes_per_sec": 0 00:11:57.384 }, 00:11:57.384 "claimed": true, 00:11:57.384 "claim_type": "exclusive_write", 00:11:57.384 "zoned": false, 00:11:57.384 "supported_io_types": { 00:11:57.384 "read": true, 00:11:57.384 "write": true, 00:11:57.384 "unmap": true, 00:11:57.384 "flush": true, 00:11:57.384 "reset": true, 00:11:57.384 "nvme_admin": false, 00:11:57.384 "nvme_io": false, 00:11:57.384 "nvme_io_md": false, 00:11:57.384 "write_zeroes": true, 00:11:57.384 "zcopy": true, 00:11:57.384 "get_zone_info": false, 00:11:57.384 "zone_management": false, 00:11:57.384 "zone_append": false, 00:11:57.384 "compare": false, 00:11:57.384 "compare_and_write": false, 00:11:57.384 "abort": true, 00:11:57.384 "seek_hole": false, 00:11:57.384 "seek_data": false, 00:11:57.384 "copy": true, 00:11:57.384 "nvme_iov_md": false 00:11:57.384 }, 00:11:57.384 "memory_domains": [ 00:11:57.384 { 00:11:57.384 "dma_device_id": "system", 00:11:57.384 "dma_device_type": 1 00:11:57.384 }, 00:11:57.384 { 00:11:57.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.384 "dma_device_type": 2 00:11:57.384 } 00:11:57.384 ], 00:11:57.384 "driver_specific": {} 00:11:57.384 } 00:11:57.384 ]' 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:57.384 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:57.643 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:57.643 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:57.643 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:57.643 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 
-- # malloc_size=536870912 00:11:57.643 16:38:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.022 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.022 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:59.022 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.022 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:59.022 16:38:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:01.560 16:38:49 
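[Annotation] The get_bdev_size / sec_size_to_bytes pair above verifies sizing end to end: the backing bdev's geometry is read over RPC and the exported namespace's size is read back from the initiator; the two must agree before partitioning. A sketch of that check; the sysfs arithmetic is inferred, as the trace only shows the resulting 536870912:

    bs=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
    nb=$(rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
    malloc_size=$(( bs * nb ))                                      # 536870912 bytes
    nvme_size=$(( $(cat /sys/block/nvme0n1/size) * 512 ))           # sectors are 512 B
    (( nvme_size == malloc_size )) && echo "sizes match"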
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:01.560 16:38:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:02.496 16:38:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:03.435 ************************************ 00:12:03.435 START TEST filesystem_in_capsule_ext4 00:12:03.435 ************************************ 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:03.435 16:38:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:03.435 mke2fs 1.47.0 (5-Feb-2023) 00:12:03.435 Discarding device blocks: 0/522240 done 00:12:03.435 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:03.435 Filesystem UUID: ab07098e-07dd-4c4b-91de-a959f17b0b2d 00:12:03.435 Superblock backups 
stored on blocks: 00:12:03.435 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:03.435 00:12:03.435 Allocating group tables: 0/64 done 00:12:03.435 Writing inode tables: 0/64 done 00:12:03.694 Creating journal (8192 blocks): done 00:12:05.895 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:12:05.895 00:12:05.895 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:05.895 16:38:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.168 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.168 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:11.168 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.168 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:11.168 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:11.168 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:11.427 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2087718 00:12:11.427 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:11.427 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:11.427 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:11.427 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:11.427 00:12:11.427 real 0m7.997s 00:12:11.427 user 0m0.017s 00:12:11.428 sys 0m0.059s 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:11.428 ************************************ 00:12:11.428 END TEST filesystem_in_capsule_ext4 00:12:11.428 ************************************ 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:11.428 ************************************ 00:12:11.428 START TEST filesystem_in_capsule_btrfs 00:12:11.428 ************************************ 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:11.428 16:38:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:11.687 btrfs-progs v6.8.1 00:12:11.687 See https://btrfs.readthedocs.io for more information. 00:12:11.687 00:12:11.687 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:11.687 NOTE: several default settings have changed in version 5.15, please make sure 00:12:11.687 this does not affect your deployments: 00:12:11.687 - DUP for metadata (-m dup) 00:12:11.687 - enabled no-holes (-O no-holes) 00:12:11.687 - enabled free-space-tree (-R free-space-tree) 00:12:11.687 00:12:11.687 Label: (null) 00:12:11.687 UUID: a743b1b0-eb05-4596-9ebc-9b66cdded5f4 00:12:11.687 Node size: 16384 00:12:11.687 Sector size: 4096 (CPU page size: 4096) 00:12:11.687 Filesystem size: 510.00MiB 00:12:11.687 Block group profiles: 00:12:11.687 Data: single 8.00MiB 00:12:11.687 Metadata: DUP 32.00MiB 00:12:11.687 System: DUP 8.00MiB 00:12:11.687 SSD detected: yes 00:12:11.687 Zoned device: no 00:12:11.687 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:11.687 Checksum: crc32c 00:12:11.687 Number of devices: 1 00:12:11.687 Devices: 00:12:11.687 ID SIZE PATH 00:12:11.687 1 510.00MiB /dev/nvme0n1p1 00:12:11.687 00:12:11.687 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:11.687 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:11.946 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:11.946 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:11.946 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:11.946 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:11.946 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:11.946 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.206 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2087718 00:12:12.206 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.206 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.206 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.206 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.206 00:12:12.206 real 0m0.733s 00:12:12.206 user 0m0.022s 00:12:12.206 sys 0m0.092s 00:12:12.206 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.206 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:12.206 ************************************ 00:12:12.206 END TEST filesystem_in_capsule_btrfs 00:12:12.206 ************************************ 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.207 ************************************ 00:12:12.207 START TEST filesystem_in_capsule_xfs 00:12:12.207 ************************************ 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:12.207 16:39:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:12.207 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:12.207 = sectsz=512 attr=2, projid32bit=1 00:12:12.207 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:12.207 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:12.207 data = bsize=4096 blocks=130560, imaxpct=25 00:12:12.207 = sunit=0 swidth=0 blks 00:12:12.207 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:12.207 log =internal log bsize=4096 blocks=16384, version=2 00:12:12.207 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:12.207 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:13.147 Discarding blocks...Done. 
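[Annotation] The mkfs.xfs geometry above is internally consistent and worth a quick cross-check: 130560 data blocks of 4096 bytes is exactly the 510.00 MiB partition (the 512 MiB namespace minus partition-table and alignment overhead).

    echo $(( 130560 * 4096 ))             # 534773760 bytes
    echo $(( 534773760 / 1024 / 1024 ))   # 510 MiB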
00:12:13.147 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:13.147 16:39:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2087718 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:15.059 00:12:15.059 real 0m2.953s 00:12:15.059 user 0m0.014s 00:12:15.059 sys 0m0.060s 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:15.059 ************************************ 00:12:15.059 END TEST filesystem_in_capsule_xfs 00:12:15.059 ************************************ 00:12:15.059 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:15.319 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:15.319 16:39:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2087718 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2087718 ']' 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2087718 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2087718 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2087718' 00:12:15.580 killing process with pid 2087718 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2087718 00:12:15.580 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2087718 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:15.840 00:12:15.840 real 0m18.648s 00:12:15.840 user 1m13.671s 00:12:15.840 sys 0m1.136s 00:12:15.840 16:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:15.840 ************************************ 00:12:15.840 END TEST nvmf_filesystem_in_capsule 00:12:15.840 ************************************ 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.840 rmmod nvme_tcp 00:12:15.840 rmmod nvme_fabrics 00:12:15.840 rmmod nvme_keyring 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.840 16:39:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.379 00:12:18.379 real 0m45.417s 00:12:18.379 user 2m29.648s 00:12:18.379 sys 0m6.508s 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:18.379 
************************************ 00:12:18.379 END TEST nvmf_filesystem 00:12:18.379 ************************************ 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.379 ************************************ 00:12:18.379 START TEST nvmf_target_discovery 00:12:18.379 ************************************ 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:18.379 * Looking for test storage... 00:12:18.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:18.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.379 --rc genhtml_branch_coverage=1 00:12:18.379 --rc genhtml_function_coverage=1 00:12:18.379 --rc genhtml_legend=1 00:12:18.379 --rc geninfo_all_blocks=1 00:12:18.379 --rc geninfo_unexecuted_blocks=1 00:12:18.379 00:12:18.379 ' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:18.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.379 --rc genhtml_branch_coverage=1 00:12:18.379 --rc genhtml_function_coverage=1 00:12:18.379 --rc genhtml_legend=1 00:12:18.379 --rc geninfo_all_blocks=1 00:12:18.379 --rc geninfo_unexecuted_blocks=1 00:12:18.379 00:12:18.379 ' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:18.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.379 --rc genhtml_branch_coverage=1 00:12:18.379 --rc genhtml_function_coverage=1 00:12:18.379 --rc genhtml_legend=1 00:12:18.379 --rc geninfo_all_blocks=1 00:12:18.379 --rc geninfo_unexecuted_blocks=1 00:12:18.379 00:12:18.379 ' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:18.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.379 --rc genhtml_branch_coverage=1 00:12:18.379 --rc genhtml_function_coverage=1 00:12:18.379 --rc genhtml_legend=1 00:12:18.379 --rc geninfo_all_blocks=1 00:12:18.379 --rc geninfo_unexecuted_blocks=1 00:12:18.379 00:12:18.379 ' 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.379 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.380 16:39:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.654 16:39:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:23.654 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.654 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:23.655 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:23.655 Found net devices under 0000:31:00.0: cvl_0_0 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:23.655 Found net devices under 0000:31:00.1: cvl_0_1 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.655 16:39:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.655 16:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:12:23.655 00:12:23.655 --- 10.0.0.2 ping statistics --- 00:12:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.655 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:12:23.655 00:12:23.655 --- 10.0.0.1 ping statistics --- 00:12:23.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.655 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2096798 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2096798 00:12:23.655 16:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2096798 ']' 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:23.655 16:39:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.655 [2024-12-06 16:39:12.214719] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:12:23.655 [2024-12-06 16:39:12.214784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.655 [2024-12-06 16:39:12.305171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.655 [2024-12-06 16:39:12.333544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.655 [2024-12-06 16:39:12.333596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.655 [2024-12-06 16:39:12.333605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.655 [2024-12-06 16:39:12.333612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.655 [2024-12-06 16:39:12.333618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
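
With the target process up on /var/tmp/spdk.sock inside the cvl_0_0_ns_spdk namespace, the records that follow configure it through the test's rpc_cmd wrapper. Collected into one place, the discovery.sh setup is equivalent to the sketch below, driven through scripts/rpc.py directly for illustration; method names, flags, and values are taken verbatim from the log (102400 and 512 are NULL_BDEV_SIZE and NULL_BLOCK_SIZE, -a allows any host, -s sets the serial number):

    # create the TCP transport and four null-bdev-backed subsystems,
    # each listening on 10.0.0.2:4420, plus a discovery referral on 4430
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        rpc.py bdev_null_create Null$i 102400 512
        rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

The subsequent nvme discover against 10.0.0.2:4420 should then report six log entries: the current discovery subsystem, the four cnode subsystems, and the port-4430 referral, which is exactly what the Discovery Log dump below shows.
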
00:12:23.655 [2024-12-06 16:39:12.335486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.655 [2024-12-06 16:39:12.335649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.655 [2024-12-06 16:39:12.335811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.655 [2024-12-06 16:39:12.335811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 [2024-12-06 16:39:13.031444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 Null1 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 [2024-12-06 16:39:13.081427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 Null2 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:24.593 Null3 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 Null4 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.593 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.594 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:24.594 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.594 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.594 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.594 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:12:24.854 00:12:24.854 Discovery Log Number of Records 6, Generation counter 6 00:12:24.854 =====Discovery Log Entry 0====== 00:12:24.854 trtype: tcp 00:12:24.854 adrfam: ipv4 00:12:24.854 subtype: current discovery subsystem 00:12:24.854 treq: not required 00:12:24.854 portid: 0 00:12:24.854 trsvcid: 4420 00:12:24.854 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:24.854 traddr: 10.0.0.2 00:12:24.854 eflags: explicit discovery connections, duplicate discovery information 00:12:24.854 sectype: none 00:12:24.854 =====Discovery Log Entry 1====== 00:12:24.854 trtype: tcp 00:12:24.854 adrfam: ipv4 00:12:24.854 subtype: nvme subsystem 00:12:24.854 treq: not required 00:12:24.854 portid: 0 00:12:24.854 trsvcid: 4420 00:12:24.854 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:24.854 traddr: 10.0.0.2 00:12:24.854 eflags: none 00:12:24.854 sectype: none 00:12:24.854 =====Discovery Log Entry 2====== 00:12:24.854 trtype: tcp 00:12:24.854 adrfam: ipv4 00:12:24.854 subtype: nvme subsystem 00:12:24.854 treq: not required 00:12:24.854 portid: 0 00:12:24.854 trsvcid: 4420 00:12:24.854 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:24.854 traddr: 10.0.0.2 00:12:24.854 eflags: none 00:12:24.854 sectype: none 00:12:24.854 =====Discovery Log Entry 3====== 00:12:24.854 trtype: tcp 00:12:24.854 adrfam: ipv4 00:12:24.854 subtype: nvme subsystem 00:12:24.854 treq: not required 00:12:24.854 portid: 0 00:12:24.854 trsvcid: 4420 00:12:24.854 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:24.854 traddr: 10.0.0.2 00:12:24.854 eflags: none 00:12:24.854 sectype: none 00:12:24.854 =====Discovery Log Entry 4====== 00:12:24.854 trtype: tcp 00:12:24.854 adrfam: ipv4 00:12:24.854 subtype: nvme subsystem 
00:12:24.854 treq: not required 00:12:24.854 portid: 0 00:12:24.854 trsvcid: 4420 00:12:24.854 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:24.854 traddr: 10.0.0.2 00:12:24.854 eflags: none 00:12:24.854 sectype: none 00:12:24.854 =====Discovery Log Entry 5====== 00:12:24.854 trtype: tcp 00:12:24.854 adrfam: ipv4 00:12:24.854 subtype: discovery subsystem referral 00:12:24.854 treq: not required 00:12:24.854 portid: 0 00:12:24.854 trsvcid: 4430 00:12:24.854 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:24.854 traddr: 10.0.0.2 00:12:24.854 eflags: none 00:12:24.854 sectype: none 00:12:24.854 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:24.854 Perform nvmf subsystem discovery via RPC 00:12:24.854 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:24.854 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.854 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.854 [ 00:12:24.854 { 00:12:24.854 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:24.854 "subtype": "Discovery", 00:12:24.854 "listen_addresses": [ 00:12:24.854 { 00:12:24.854 "trtype": "TCP", 00:12:24.854 "adrfam": "IPv4", 00:12:24.854 "traddr": "10.0.0.2", 00:12:24.854 "trsvcid": "4420" 00:12:24.854 } 00:12:24.854 ], 00:12:24.854 "allow_any_host": true, 00:12:24.854 "hosts": [] 00:12:24.854 }, 00:12:24.854 { 00:12:24.854 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:24.854 "subtype": "NVMe", 00:12:24.854 "listen_addresses": [ 00:12:24.854 { 00:12:24.854 "trtype": "TCP", 00:12:24.854 "adrfam": "IPv4", 00:12:24.854 "traddr": "10.0.0.2", 00:12:24.854 "trsvcid": "4420" 00:12:24.854 } 00:12:24.854 ], 00:12:24.854 "allow_any_host": true, 00:12:24.854 "hosts": [], 00:12:24.854 "serial_number": "SPDK00000000000001", 00:12:24.854 "model_number": "SPDK bdev Controller", 00:12:24.854 "max_namespaces": 32, 00:12:24.854 "min_cntlid": 1, 00:12:24.854 "max_cntlid": 65519, 00:12:24.854 "namespaces": [ 00:12:24.854 { 00:12:24.854 "nsid": 1, 00:12:24.854 "bdev_name": "Null1", 00:12:24.854 "name": "Null1", 00:12:24.854 "nguid": "966CB1C15EE9411986DA279CB2371547", 00:12:24.854 "uuid": "966cb1c1-5ee9-4119-86da-279cb2371547" 00:12:24.854 } 00:12:24.854 ] 00:12:24.854 }, 00:12:24.854 { 00:12:24.854 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:24.854 "subtype": "NVMe", 00:12:24.854 "listen_addresses": [ 00:12:24.854 { 00:12:24.855 "trtype": "TCP", 00:12:24.855 "adrfam": "IPv4", 00:12:24.855 "traddr": "10.0.0.2", 00:12:24.855 "trsvcid": "4420" 00:12:24.855 } 00:12:24.855 ], 00:12:24.855 "allow_any_host": true, 00:12:24.855 "hosts": [], 00:12:24.855 "serial_number": "SPDK00000000000002", 00:12:24.855 "model_number": "SPDK bdev Controller", 00:12:24.855 "max_namespaces": 32, 00:12:24.855 "min_cntlid": 1, 00:12:24.855 "max_cntlid": 65519, 00:12:24.855 "namespaces": [ 00:12:24.855 { 00:12:24.855 "nsid": 1, 00:12:24.855 "bdev_name": "Null2", 00:12:24.855 "name": "Null2", 00:12:24.855 "nguid": "F2A091FC23A24C24B16AC486B83CBE77", 00:12:24.855 "uuid": "f2a091fc-23a2-4c24-b16a-c486b83cbe77" 00:12:24.855 } 00:12:24.855 ] 00:12:24.855 }, 00:12:24.855 { 00:12:24.855 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:24.855 "subtype": "NVMe", 00:12:24.855 "listen_addresses": [ 00:12:24.855 { 00:12:24.855 "trtype": "TCP", 00:12:24.855 "adrfam": "IPv4", 00:12:24.855 "traddr": "10.0.0.2", 
00:12:24.855 "trsvcid": "4420" 00:12:24.855 } 00:12:24.855 ], 00:12:24.855 "allow_any_host": true, 00:12:24.855 "hosts": [], 00:12:24.855 "serial_number": "SPDK00000000000003", 00:12:24.855 "model_number": "SPDK bdev Controller", 00:12:24.855 "max_namespaces": 32, 00:12:24.855 "min_cntlid": 1, 00:12:24.855 "max_cntlid": 65519, 00:12:24.855 "namespaces": [ 00:12:24.855 { 00:12:24.855 "nsid": 1, 00:12:24.855 "bdev_name": "Null3", 00:12:24.855 "name": "Null3", 00:12:24.855 "nguid": "089CD390986546B4B6DE3EEB59F2D8B0", 00:12:24.855 "uuid": "089cd390-9865-46b4-b6de-3eeb59f2d8b0" 00:12:24.855 } 00:12:24.855 ] 00:12:24.855 }, 00:12:24.855 { 00:12:24.855 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:24.855 "subtype": "NVMe", 00:12:24.855 "listen_addresses": [ 00:12:24.855 { 00:12:24.855 "trtype": "TCP", 00:12:24.855 "adrfam": "IPv4", 00:12:24.855 "traddr": "10.0.0.2", 00:12:24.855 "trsvcid": "4420" 00:12:24.855 } 00:12:24.855 ], 00:12:24.855 "allow_any_host": true, 00:12:24.855 "hosts": [], 00:12:24.855 "serial_number": "SPDK00000000000004", 00:12:24.855 "model_number": "SPDK bdev Controller", 00:12:24.855 "max_namespaces": 32, 00:12:24.855 "min_cntlid": 1, 00:12:24.855 "max_cntlid": 65519, 00:12:24.855 "namespaces": [ 00:12:24.855 { 00:12:24.855 "nsid": 1, 00:12:24.855 "bdev_name": "Null4", 00:12:24.855 "name": "Null4", 00:12:24.855 "nguid": "62D7A83EC1D344E5B31C4BE72D90D2DD", 00:12:24.855 "uuid": "62d7a83e-c1d3-44e5-b31c-4be72d90d2dd" 00:12:24.855 } 00:12:24.855 ] 00:12:24.855 } 00:12:24.855 ] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.855 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.856 16:39:13 
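The deletions above walk cnode1..cnode4 in lockstep with their backing null bdevs. A condensed sketch of the same teardown, using the naming convention this test established earlier:

# Remove each test subsystem and its null bdev, as target/discovery.sh@42-44 does.
for i in $(seq 1 4); do
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    scripts/rpc.py bdev_null_delete "Null$i"
done
# Drop the referral that was registered on service port 4430.
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430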
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.856 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.856 rmmod nvme_tcp 00:12:25.114 rmmod nvme_fabrics 00:12:25.114 rmmod nvme_keyring 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2096798 ']' 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2096798 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2096798 ']' 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2096798 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2096798 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2096798' 00:12:25.114 killing process with pid 2096798 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2096798 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2096798 00:12:25.114 16:39:13 
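nvmftestfini above unloads the nvme kernel modules and terminates the target by pid. A rough sketch of that kill-and-wait pattern (the poll interval is illustrative; the harness's killprocess also special-cases sudo-owned processes):

# Stop the target and wait until the pid is really gone, then unload modules.
nvmfpid=2096798                      # pid recorded by nvmfappstart in this run
kill "$nvmfpid"
while kill -0 "$nvmfpid" 2> /dev/null; do
    sleep 0.1                        # reactor still shutting down
done
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics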
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.114 16:39:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.645 00:12:27.645 real 0m9.296s 00:12:27.645 user 0m7.307s 00:12:27.645 sys 0m4.484s 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:27.645 ************************************ 00:12:27.645 END TEST nvmf_target_discovery 00:12:27.645 ************************************ 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:27.645 ************************************ 00:12:27.645 START TEST nvmf_referrals 00:12:27.645 ************************************ 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:27.645 * Looking for test storage... 
00:12:27.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:27.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.645 --rc genhtml_branch_coverage=1 00:12:27.645 --rc genhtml_function_coverage=1 00:12:27.645 --rc genhtml_legend=1 00:12:27.645 --rc geninfo_all_blocks=1 00:12:27.645 --rc geninfo_unexecuted_blocks=1 00:12:27.645 00:12:27.645 ' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:27.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.645 --rc genhtml_branch_coverage=1 00:12:27.645 --rc genhtml_function_coverage=1 00:12:27.645 --rc genhtml_legend=1 00:12:27.645 --rc geninfo_all_blocks=1 00:12:27.645 --rc geninfo_unexecuted_blocks=1 00:12:27.645 00:12:27.645 ' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:27.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.645 --rc genhtml_branch_coverage=1 00:12:27.645 --rc genhtml_function_coverage=1 00:12:27.645 --rc genhtml_legend=1 00:12:27.645 --rc geninfo_all_blocks=1 00:12:27.645 --rc geninfo_unexecuted_blocks=1 00:12:27.645 00:12:27.645 ' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:27.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.645 --rc genhtml_branch_coverage=1 00:12:27.645 --rc genhtml_function_coverage=1 00:12:27.645 --rc genhtml_legend=1 00:12:27.645 --rc geninfo_all_blocks=1 00:12:27.645 --rc geninfo_unexecuted_blocks=1 00:12:27.645 00:12:27.645 ' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.645 16:39:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.645 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.645 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.645 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.645 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.645 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
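Note the shell error captured just above: common.sh line 33 evaluates '[' '' -eq 1 ']', and -eq requires integers on both sides, so an empty operand produces "[: : integer expression expected". A hedged sketch of the usual guard (the variable name here is illustrative, not the one common.sh uses):

# Defaulting the variable keeps the integer test well-formed when it is unset.
some_flag=""
if [ "${some_flag:-0}" -eq 1 ]; then
    echo "flag enabled"
fi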
00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.646 16:39:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:32.914 16:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:32.914 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:32.914 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:32.914 
16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:32.914 Found net devices under 0000:31:00.0: cvl_0_0 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.914 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:32.915 Found net devices under 0000:31:00.1: cvl_0_1 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.915 16:39:21 
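The enumeration above resolves each E810 PCI function to its kernel netdev through sysfs. A minimal sketch of that mapping, using the 0000:31:00.0/1 addresses seen in this run:

# Map each NIC PCI function to its interface name, as the pci_net_devs loop does.
for pci in 0000:31:00.0 0000:31:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net device under $pci: $(basename "$dev")"
    done
done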
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:12:32.915 00:12:32.915 --- 10.0.0.2 ping statistics --- 00:12:32.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.915 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:12:32.915 00:12:32.915 --- 10.0.0.1 ping statistics --- 00:12:32.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.915 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2101480 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2101480 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2101480 ']' 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
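nvmf_tcp_init above builds the two-endpoint topology for the test: the target interface moves into a private network namespace with 10.0.0.2, the initiator side keeps 10.0.0.1, and both directions are ping-verified. Condensed, with the cvl_0_0/cvl_0_1 names from this run:

# Target NIC lives in its own netns so initiator and target share one host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1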
00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:32.915 16:39:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.915 [2024-12-06 16:39:21.564068] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:12:32.915 [2024-12-06 16:39:21.564146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.174 [2024-12-06 16:39:21.655560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.174 [2024-12-06 16:39:21.684703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.174 [2024-12-06 16:39:21.684753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.174 [2024-12-06 16:39:21.684763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.174 [2024-12-06 16:39:21.684770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.174 [2024-12-06 16:39:21.684776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.174 [2024-12-06 16:39:21.686699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.174 [2024-12-06 16:39:21.686868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.174 [2024-12-06 16:39:21.687039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.174 [2024-12-06 16:39:21.687041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 [2024-12-06 16:39:22.381480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:33.743 16:39:22 
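The referrals test then boots its own target inside the namespace, creates the TCP transport, and opens a discovery listener on port 8009. A condensed sketch of that bootstrap (waitforlisten's retry logic is elided):

# Start nvmf_tgt in the target namespace and prepare it for discovery traffic.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# ... wait for /var/tmp/spdk.sock to accept RPCs, then:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery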
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 [2024-12-06 16:39:22.401443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.743 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:34.003 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.004 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.263 16:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:34.263 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:34.264 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.264 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.264 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.264 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:34.264 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.264 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 
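The sequence above registers referrals, reads them back over RPC, and cross-checks the list against what an initiator sees in the discovery log. A condensed sketch of that roundtrip (the --hostnqn/--hostid flags used in this run are omitted for brevity):

# Register three referrals on service port 4430, as referrals.sh@44-46 does.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
# The target's own view of the referral addresses...
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
# ...should match the non-local entries of the discovery log on port 8009.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
  | sort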
00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:34.523 16:39:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:34.523 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:34.523 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:34.523 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:34.523 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:34.523 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:34.523 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:34.523 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:34.783 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:34.783 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:34.783 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:34.783 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:34.783 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:34.783 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.042 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:35.042 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:35.042 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.043 16:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.043 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 
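The helpers these checks call can be reconstructed from the jq filters visible in the trace: get_referral_ips collects traddr values either from the nvmf_discovery_get_referrals RPC or from an nvme discover log page, and get_discovery_entries filters the same log page by record subtype. A sketch under that reading (the function bodies are inferred from the trace rather than copied from referrals.sh, and the --hostnqn/--hostid flags are omitted for brevity):

  get_referral_ips() {
      if [[ $1 == rpc ]]; then
          ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
      else  # nvme path: every record except our own discovery subsystem entry
          nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
              jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
      fi
  }

  get_discovery_entries() {
      local subtype=$1  # e.g. "nvme subsystem" or "discovery subsystem referral"
      nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
          jq ".records[] | select(.subtype == \"$subtype\")"
  }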
00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.302 16:39:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:35.562 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.822 rmmod nvme_tcp 00:12:35.822 rmmod nvme_fabrics 00:12:35.822 rmmod nvme_keyring 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2101480 ']' 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2101480 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2101480 ']' 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2101480 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2101480 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2101480' 00:12:35.822 killing process with pid 2101480 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2101480 00:12:35.822 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2101480 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.082 16:39:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.988 00:12:37.988 real 0m10.746s 00:12:37.988 user 0m13.476s 00:12:37.988 sys 0m4.909s 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 ************************************ 00:12:37.988 END TEST nvmf_referrals 00:12:37.988 ************************************ 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.988 ************************************ 00:12:37.988 START TEST nvmf_connect_disconnect 00:12:37.988 ************************************ 00:12:37.988 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:38.249 * Looking for test storage... 
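The nvmftestfini sequence above tears the fixture down in a fixed order: unload the host-side NVMe modules, kill the nvmf_tgt process, strip only the iptables rules the suite tagged with an SPDK_NVMF comment, remove the target's network namespace, and flush the initiator interface. A condensed sketch of that sequence (namespace and interface names come from the trace; ip netns delete is an assumed stand-in for the suite's _remove_spdk_ns helper):

  modprobe -v -r nvme-tcp nvme-fabrics                   # source of the rmmod lines above
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null || true # stop the nvmf_tgt reactor
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumption: stands in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # clear the initiator-side address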
00:12:38.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.249 --rc genhtml_branch_coverage=1 00:12:38.249 --rc genhtml_function_coverage=1 00:12:38.249 --rc genhtml_legend=1 00:12:38.249 --rc geninfo_all_blocks=1 00:12:38.249 --rc geninfo_unexecuted_blocks=1 00:12:38.249 00:12:38.249 ' 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.249 --rc genhtml_branch_coverage=1 00:12:38.249 --rc genhtml_function_coverage=1 00:12:38.249 --rc genhtml_legend=1 00:12:38.249 --rc geninfo_all_blocks=1 00:12:38.249 --rc geninfo_unexecuted_blocks=1 00:12:38.249 00:12:38.249 ' 00:12:38.249 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.250 --rc genhtml_branch_coverage=1 00:12:38.250 --rc genhtml_function_coverage=1 00:12:38.250 --rc genhtml_legend=1 00:12:38.250 --rc geninfo_all_blocks=1 00:12:38.250 --rc geninfo_unexecuted_blocks=1 00:12:38.250 00:12:38.250 ' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.250 --rc genhtml_branch_coverage=1 00:12:38.250 --rc genhtml_function_coverage=1 00:12:38.250 --rc genhtml_legend=1 00:12:38.250 --rc geninfo_all_blocks=1 00:12:38.250 --rc geninfo_unexecuted_blocks=1 00:12:38.250 00:12:38.250 ' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.250 16:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.250 16:39:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:43.536 
16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.536 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:43.537 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.537 
16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:43.537 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:43.537 Found net devices under 0000:31:00.0: cvl_0_0 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
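Device discovery here walks the Intel e810 PCI list (vendor 0x8086, device 0x159b) and maps each function to its kernel netdev through sysfs before checking the link state. A standalone sketch of the same mapping (the lspci filtering and the operstate check are assumptions; the suite builds its PCI bus cache elsewhere):

  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      for dev in "${pci_net_devs[@]##*/}"; do
          # keep only interfaces reporting an operational link
          [[ $(cat "/sys/class/net/$dev/operstate" 2>/dev/null) == up ]] &&
              echo "Found net devices under $pci: $dev"
      done
  done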
00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:43.537 Found net devices under 0000:31:00.1: cvl_0_1 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.537 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:12:43.798 00:12:43.798 --- 10.0.0.2 ping statistics --- 00:12:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.798 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:12:43.798 00:12:43.798 --- 10.0.0.1 ping statistics --- 00:12:43.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.798 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.798 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2106604 00:12:43.799 16:39:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2106604 00:12:43.799 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2106604 ']' 00:12:43.799 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.799 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:43.799 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.799 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.799 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:43.799 16:39:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:43.799 [2024-12-06 16:39:32.443199] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:12:43.799 [2024-12-06 16:39:32.443269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.059 [2024-12-06 16:39:32.533595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.059 [2024-12-06 16:39:32.561834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.059 [2024-12-06 16:39:32.561886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.059 [2024-12-06 16:39:32.561895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.059 [2024-12-06 16:39:32.561902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.059 [2024-12-06 16:39:32.561909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
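The fixture gives the target its own network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule tagged SPDK_NVMF opens port 4420, and nvmf_tgt then starts inside the namespace. A condensed sketch using the values from the trace (the RPC polling loop is a simplified stand-in for waitforlisten):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the target's RPC socket answers (simplified waitforlisten)
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done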
00:12:44.059 [2024-12-06 16:39:32.564077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.059 [2024-12-06 16:39:32.564239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.059 [2024-12-06 16:39:32.564515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.059 [2024-12-06 16:39:32.564516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.628 [2024-12-06 16:39:33.256768] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.628 16:39:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:44.628 [2024-12-06 16:39:33.312589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:44.628 16:39:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:47.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.068 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:51.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.047 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.900 [2024-12-06 16:41:33.036017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c178b0 is same with the state(6) to be set 00:14:44.900 [2024-12-06 16:41:33.036058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c178b0 is same with the state(6) to be set 00:14:44.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.728 [2024-12-06 
16:42:26.056070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17710 is same with the state(6) to be set 00:15:37.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:35.616 rmmod nvme_tcp 00:16:35.616 rmmod nvme_fabrics 00:16:35.616 rmmod nvme_keyring 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2106604 ']' 00:16:35.616 
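[annotation] A minimal sketch of the 100-iteration cycle whose output fills the trace above. The NQN, listener address, port, and the `-i 8` I/O-queue count are taken directly from the log; the readiness wait between connect and disconnect is a simplified assumption about what connect_disconnect.sh actually does between the two commands.

    # Sketch of the traced connect/disconnect loop (readiness wait is an assumption)
    NQN=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n "$NQN" -a 10.0.0.2 -s 4420  # -i 8: request 8 I/O queues
        sleep 1                                                 # stand-in for the harness's readiness check
        nvme disconnect -n "$NQN"  # emits the 'disconnected 1 controller(s)' lines seen above
    done

The timestamps above advance roughly 2-3 seconds per iteration, so this loop accounts for most of the test's ~4-minute wall time reported below.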
16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2106604 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2106604 ']' 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2106604 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2106604 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2106604' 00:16:35.616 killing process with pid 2106604 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2106604 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2106604 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.616 16:43:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.521 00:16:37.521 real 3m59.377s 00:16:37.521 user 15m15.944s 00:16:37.521 sys 0m21.730s 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:37.521 ************************************ 00:16:37.521 END TEST nvmf_connect_disconnect 00:16:37.521 
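[annotation] The nvmftestfini sequence just traced follows a fixed pattern: unload the kernel NVMe fabrics modules, kill the target, then strip only the SPDK-tagged firewall rules by filtering a full iptables save/restore round trip. Condensed below; the namespace removal line is an assumption about _remove_spdk_ns, whose output the harness discards.

    # Teardown pattern recorded above
    modprobe -v -r nvme-tcp nvme-fabrics                  # source of the rmmod lines in the log
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop rules tagged 'SPDK_NVMF:', keep the rest
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

Tagging each inserted rule with an 'SPDK_NVMF:' comment is what makes the grep-based restore safe: rules added by anything else on the host survive the filter untouched.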
************************************ 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.521 ************************************ 00:16:37.521 START TEST nvmf_multitarget 00:16:37.521 ************************************ 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:37.521 * Looking for test storage... 00:16:37.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.521 --rc genhtml_branch_coverage=1 00:16:37.521 --rc genhtml_function_coverage=1 00:16:37.521 --rc genhtml_legend=1 00:16:37.521 --rc geninfo_all_blocks=1 00:16:37.521 --rc geninfo_unexecuted_blocks=1 00:16:37.521 00:16:37.521 ' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.521 --rc genhtml_branch_coverage=1 00:16:37.521 --rc genhtml_function_coverage=1 00:16:37.521 --rc genhtml_legend=1 00:16:37.521 --rc geninfo_all_blocks=1 00:16:37.521 --rc geninfo_unexecuted_blocks=1 00:16:37.521 00:16:37.521 ' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.521 --rc genhtml_branch_coverage=1 00:16:37.521 --rc genhtml_function_coverage=1 00:16:37.521 --rc genhtml_legend=1 00:16:37.521 --rc geninfo_all_blocks=1 00:16:37.521 --rc geninfo_unexecuted_blocks=1 00:16:37.521 00:16:37.521 ' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:37.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.521 --rc genhtml_branch_coverage=1 00:16:37.521 --rc genhtml_function_coverage=1 00:16:37.521 --rc genhtml_legend=1 00:16:37.521 --rc geninfo_all_blocks=1 00:16:37.521 --rc geninfo_unexecuted_blocks=1 00:16:37.521 00:16:37.521 ' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.521 16:43:26 
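[annotation] Before sourcing nvmf/common.sh, the harness decides whether the installed lcov predates 2.x by splitting both version strings on ".-:" and comparing field by field (the scripts/common.sh cmp_versions trace above). The same check as a standalone function; this is a paraphrase of the traced logic, not a verbatim copy, and it assumes purely numeric fields.

    # Dotted-version "less than" in the style of the cmp_versions trace above
    version_lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0  # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: use pre-2.x coverage flags"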
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:37.521 16:43:26 
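[annotation] The `[: : integer expression expected` complaint logged from nvmf/common.sh line 33 above is the standard failure of `'[' '' -eq 1 ']'`: an empty string reached a numeric comparison. The usual defensive form defaults the value first; SOME_FLAG below is a hypothetical stand-in for whichever harness variable was empty here.

    # Avoids the '[: : integer expression expected' error traced above.
    # SOME_FLAG is hypothetical; the real script tests one of its own config flags.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi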
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.521 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.780 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:37.780 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:37.780 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:37.780 16:43:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
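[annotation] gather_supported_nvmf_pci_devs, traced below, buckets NICs into e810/x722/mlx arrays keyed by vendor:device ID out of SPDK's internal pci_bus_cache. A rough sysfs-only equivalent of that bucketing; since the cache is internal to the harness, this walks /sys directly, and only a subset of the nine Mellanox IDs the trace checks is shown.

    # Approximate sysfs version of the e810/x722/mlx bucketing in nvmf/common.sh
    e810=() x722=() mlx=()
    for dev in /sys/bus/pci/devices/*; do
        id="$(cat "$dev/vendor"):$(cat "$dev/device")"
        case "$id" in
            0x8086:0x1592 | 0x8086:0x159b) e810+=("${dev##*/}") ;;  # Intel E810 (matched in the log)
            0x8086:0x37d2)                 x722+=("${dev##*/}") ;;  # Intel X722
            0x15b3:0x1017 | 0x15b3:0x1019) mlx+=("${dev##*/}")  ;;  # two of the ConnectX IDs checked above
        esac
    done
    echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"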
00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:43.055 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:43.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:43.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:43.056 Found net devices under 0000:31:00.0: cvl_0_0 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:43.056 Found net devices under 0000:31:00.1: cvl_0_1 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:43.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:16:43.056 00:16:43.056 --- 10.0.0.2 ping statistics --- 00:16:43.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.056 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
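[annotation] nvmf_tcp_init, traced above, builds the test topology by pushing one port of the NIC into a private network namespace so target and initiator can talk over real hardware on a single host. The commands collected from the trace (they appear verbatim above, interleaved with the xtrace bookkeeping):

    # Topology built by nvmf_tcp_init: cvl_0_0 = target side (namespaced), cvl_0_1 = initiator side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagging the rule so teardown can grep it back out
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The two ping checks (sub-millisecond RTTs in the log) confirm both directions of the path before any NVMe traffic is attempted.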
00:16:43.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:16:43.056 00:16:43.056 --- 10.0.0.1 ping statistics --- 00:16:43.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.056 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2163300 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2163300 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2163300 ']' 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.056 16:43:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:43.056 [2024-12-06 16:43:31.633420] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
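[annotation] nvmfappstart then launches nvmf_tgt inside the namespace and blocks in waitforlisten until the RPC socket answers. The launch line is taken from the trace (path shortened); the polling loop is a simplified stand-in for the real waitforlisten, assuming the stock rpc.py and the default /var/tmp/spdk.sock socket.

    # Launch the target in the namespace (command from the trace, path shortened)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Simplified waitforlisten: poll the RPC socket until it responds (assumption)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done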
00:16:43.056 [2024-12-06 16:43:31.633486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.056 [2024-12-06 16:43:31.724274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.315 [2024-12-06 16:43:31.753576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.315 [2024-12-06 16:43:31.753630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.315 [2024-12-06 16:43:31.753640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.315 [2024-12-06 16:43:31.753654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.315 [2024-12-06 16:43:31.753660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.315 [2024-12-06 16:43:31.755913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.315 [2024-12-06 16:43:31.756058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.315 [2024-12-06 16:43:31.756192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.315 [2024-12-06 16:43:31.756191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:43.882 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:44.142 "nvmf_tgt_1" 00:16:44.142 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:44.142 "nvmf_tgt_2" 00:16:44.142 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:16:44.142 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:44.142 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:44.142 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:44.401 true 00:16:44.401 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:44.401 true 00:16:44.401 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:44.401 16:43:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.401 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.401 rmmod nvme_tcp 00:16:44.401 rmmod nvme_fabrics 00:16:44.401 rmmod nvme_keyring 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2163300 ']' 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2163300 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2163300 ']' 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2163300 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2163300 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.661 16:43:33 
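[annotation] Stripped of the xtrace noise, the multitarget test body above is a create/verify/delete round trip against multitarget_rpc.py, with jq counting targets at each step. Collected in one place below (path shortened; reading `-s 32` as a max-subsystems cap is an assumption, the trace only shows the flag).

    rpc=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32       # prints "nvmf_tgt_1" on success
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]  # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1             # prints "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # back to just the default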
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2163300' 00:16:44.661 killing process with pid 2163300 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2163300 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2163300 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.661 16:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.674 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:46.674 00:16:46.674 real 0m9.240s 00:16:46.674 user 0m8.109s 00:16:46.674 sys 0m4.410s 00:16:46.674 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.674 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:46.674 ************************************ 00:16:46.674 END TEST nvmf_multitarget 00:16:46.674 ************************************ 00:16:46.674 16:43:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:46.674 16:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.674 16:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.674 16:43:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.972 ************************************ 00:16:46.972 START TEST nvmf_rpc 00:16:46.972 ************************************ 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:46.972 * Looking for test storage... 
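[annotation] The killprocess helper, exercised above for pid 2163300 and earlier for 2106604, guards against signalling the wrong thing before killing the target. A paraphrase of the checks visible in the trace (the uname gate is elided; the kill/wait pairing follows the @973/@978 calls above).

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                           # '[' -z ... ']' guard in the trace
        # refuse to signal the sudo wrapper itself
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                     # reap it if it is our child
    }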
00:16:46.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.972 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:46.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.973 --rc genhtml_branch_coverage=1 00:16:46.973 --rc genhtml_function_coverage=1 00:16:46.973 --rc genhtml_legend=1 00:16:46.973 --rc geninfo_all_blocks=1 00:16:46.973 --rc geninfo_unexecuted_blocks=1 00:16:46.973 00:16:46.973 ' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:46.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.973 --rc genhtml_branch_coverage=1 00:16:46.973 --rc genhtml_function_coverage=1 00:16:46.973 --rc genhtml_legend=1 00:16:46.973 --rc geninfo_all_blocks=1 00:16:46.973 --rc geninfo_unexecuted_blocks=1 00:16:46.973 00:16:46.973 ' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:46.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.973 --rc genhtml_branch_coverage=1 00:16:46.973 --rc genhtml_function_coverage=1 00:16:46.973 --rc genhtml_legend=1 00:16:46.973 --rc geninfo_all_blocks=1 00:16:46.973 --rc geninfo_unexecuted_blocks=1 00:16:46.973 00:16:46.973 ' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:46.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.973 --rc genhtml_branch_coverage=1 00:16:46.973 --rc genhtml_function_coverage=1 00:16:46.973 --rc genhtml_legend=1 00:16:46.973 --rc geninfo_all_blocks=1 00:16:46.973 --rc geninfo_unexecuted_blocks=1 00:16:46.973 00:16:46.973 ' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:46.973 16:43:35 
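The "[: : integer expression expected" message logged above is the shell complaining about the immediately preceding test, '[' '' -eq 1 ']': an unset variable reached a numeric comparison. A hedged sketch of the failure and the usual guard; the variable name flag is a placeholder, not the actual variable common.sh tests at line 33:

    flag=""
    [ "$flag" -eq 1 ] 2>/dev/null || echo "test errored or was false"

    # Defaulting the expansion keeps the operand numeric, so the test is well formed:
    [ "${flag:-0}" -eq 1 ] && echo "enabled" || echo "disabled"

The traced run tolerates the error because the failed test simply falls through to the next branch.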
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:46.973 16:43:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:52.253 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:52.253 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:52.253 Found net devices under 0000:31:00.0: cvl_0_0 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.253 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:52.254 Found net devices under 0000:31:00.1: cvl_0_1 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:52.254 16:43:40 
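The "Found net devices under 0000:31:00.x" lines come from globbing each matched PCI function's net/ directory in sysfs, which is how the script maps an E810 port to its kernel interface name (cvl_0_0, cvl_0_1) before picking target and initiator interfaces. A minimal sketch of the same lookup, assuming a Linux sysfs layout; the address is the one from this log:

    pci=0000:31:00.0
    # Each entry under .../net/ is an interface bound to that PCI function.
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done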
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.254 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.512 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.512 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.512 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:52.512 16:43:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:52.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:16:52.512 00:16:52.512 --- 10.0.0.2 ping statistics --- 00:16:52.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.512 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:16:52.512 00:16:52.512 --- 10.0.0.1 ping statistics --- 00:16:52.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.512 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:52.512 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2168020 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2168020 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2168020 ']' 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.513 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:52.513 [2024-12-06 16:43:41.124201] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
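By this point the trace has built the test topology: the target-side interface is moved into a fresh network namespace (cvl_0_0_ns_spdk) with 10.0.0.2/24, the initiator keeps 10.0.0.1/24 in the root namespace, an iptables rule opens TCP/4420 on the initiator interface, and both directions are ping-verified. A condensed sketch of that setup, assuming the interface names from this log and root privileges:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

With the namespace verified, nvmfappstart then launches nvmf_tgt under "ip netns exec cvl_0_0_ns_spdk", which is why the EAL/reactor startup that follows runs inside the target namespace.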
00:16:52.513 [2024-12-06 16:43:41.124252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.771 [2024-12-06 16:43:41.208728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.771 [2024-12-06 16:43:41.227302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.771 [2024-12-06 16:43:41.227337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.771 [2024-12-06 16:43:41.227345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.771 [2024-12-06 16:43:41.227352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.771 [2024-12-06 16:43:41.227358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.771 [2024-12-06 16:43:41.229000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.771 [2024-12-06 16:43:41.229155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.771 [2024-12-06 16:43:41.229530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.771 [2024-12-06 16:43:41.229531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:53.338 "tick_rate": 2400000000, 00:16:53.338 "poll_groups": [ 00:16:53.338 { 00:16:53.338 "name": "nvmf_tgt_poll_group_000", 00:16:53.338 "admin_qpairs": 0, 00:16:53.338 "io_qpairs": 0, 00:16:53.338 "current_admin_qpairs": 0, 00:16:53.338 "current_io_qpairs": 0, 00:16:53.338 "pending_bdev_io": 0, 00:16:53.338 "completed_nvme_io": 0, 00:16:53.338 "transports": [] 00:16:53.338 }, 00:16:53.338 { 00:16:53.338 "name": "nvmf_tgt_poll_group_001", 00:16:53.338 "admin_qpairs": 0, 00:16:53.338 "io_qpairs": 0, 00:16:53.338 "current_admin_qpairs": 0, 00:16:53.338 "current_io_qpairs": 0, 00:16:53.338 "pending_bdev_io": 0, 00:16:53.338 "completed_nvme_io": 0, 00:16:53.338 "transports": [] 00:16:53.338 }, 00:16:53.338 { 00:16:53.338 "name": "nvmf_tgt_poll_group_002", 00:16:53.338 "admin_qpairs": 0, 00:16:53.338 "io_qpairs": 0, 00:16:53.338 
"current_admin_qpairs": 0, 00:16:53.338 "current_io_qpairs": 0, 00:16:53.338 "pending_bdev_io": 0, 00:16:53.338 "completed_nvme_io": 0, 00:16:53.338 "transports": [] 00:16:53.338 }, 00:16:53.338 { 00:16:53.338 "name": "nvmf_tgt_poll_group_003", 00:16:53.338 "admin_qpairs": 0, 00:16:53.338 "io_qpairs": 0, 00:16:53.338 "current_admin_qpairs": 0, 00:16:53.338 "current_io_qpairs": 0, 00:16:53.338 "pending_bdev_io": 0, 00:16:53.338 "completed_nvme_io": 0, 00:16:53.338 "transports": [] 00:16:53.338 } 00:16:53.338 ] 00:16:53.338 }' 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:53.338 16:43:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.338 [2024-12-06 16:43:42.010377] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.338 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:53.598 "tick_rate": 2400000000, 00:16:53.598 "poll_groups": [ 00:16:53.598 { 00:16:53.598 "name": "nvmf_tgt_poll_group_000", 00:16:53.598 "admin_qpairs": 0, 00:16:53.598 "io_qpairs": 0, 00:16:53.598 "current_admin_qpairs": 0, 00:16:53.598 "current_io_qpairs": 0, 00:16:53.598 "pending_bdev_io": 0, 00:16:53.598 "completed_nvme_io": 0, 00:16:53.598 "transports": [ 00:16:53.598 { 00:16:53.598 "trtype": "TCP" 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "name": "nvmf_tgt_poll_group_001", 00:16:53.598 "admin_qpairs": 0, 00:16:53.598 "io_qpairs": 0, 00:16:53.598 "current_admin_qpairs": 0, 00:16:53.598 "current_io_qpairs": 0, 00:16:53.598 "pending_bdev_io": 0, 00:16:53.598 "completed_nvme_io": 0, 00:16:53.598 "transports": [ 00:16:53.598 { 00:16:53.598 "trtype": "TCP" 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "name": "nvmf_tgt_poll_group_002", 00:16:53.598 "admin_qpairs": 0, 00:16:53.598 "io_qpairs": 0, 00:16:53.598 "current_admin_qpairs": 0, 00:16:53.598 "current_io_qpairs": 0, 00:16:53.598 "pending_bdev_io": 0, 00:16:53.598 "completed_nvme_io": 0, 00:16:53.598 "transports": [ 00:16:53.598 { 00:16:53.598 "trtype": "TCP" 
00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }, 00:16:53.598 { 00:16:53.598 "name": "nvmf_tgt_poll_group_003", 00:16:53.598 "admin_qpairs": 0, 00:16:53.598 "io_qpairs": 0, 00:16:53.598 "current_admin_qpairs": 0, 00:16:53.598 "current_io_qpairs": 0, 00:16:53.598 "pending_bdev_io": 0, 00:16:53.598 "completed_nvme_io": 0, 00:16:53.598 "transports": [ 00:16:53.598 { 00:16:53.598 "trtype": "TCP" 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 } 00:16:53.598 ] 00:16:53.598 }' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.598 Malloc1 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
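The jcount/jsum helpers expanded above reduce nvmf_get_stats output to assertions, run once before and once after "nvmf_create_transport -t tcp -o -u 8192" (note the transports array going from empty to TCP): jcount counts how many values a jq filter yields (four poll groups, one per core in -m 0xF), and jsum totals a numeric field through awk. A sketch of both patterns against a stats dump held in $stats, as the script does:

    # Count results of a jq filter; expect one poll group per core in mask 0xF.
    groups=$(jq '.poll_groups[].name' <<< "$stats" | wc -l)
    (( groups == 4 ))

    # Sum a numeric field across poll groups; still 0 before any qpair exists.
    qpairs=$(jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')
    (( qpairs == 0 ))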
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.598 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.599 [2024-12-06 16:43:42.143781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:16:53.599 [2024-12-06 16:43:42.166522] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:16:53.599 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:53.599 could not add new controller: failed to write to nvme-fabrics device 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:53.599 16:43:42 
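The failed connect above is deliberate: with allow_any_host disabled and no hosts registered, the target's access check rejects the initiator's NQN ("does not allow host ...") and the write to /dev/nvme-fabrics fails, which the NOT wrapper treats as the expected outcome (es=1). The trace then runs the positive half by registering the host NQN. A condensed sketch of the pair; scripts/rpc.py stands in for the script's rpc_cmd wrapper:

    # Expected failure: host NQN not on the subsystem's allow list.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" && echo "BUG: connect should have failed"

    # Register the host, then the same connect is permitted.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN"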
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.599 16:43:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.518 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.518 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:55.518 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.518 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:55.518 16:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.424 [2024-12-06 16:43:45.872127] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:16:57.424 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:57.424 could not add new controller: failed to write to nvme-fabrics device 00:16:57.424 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:57.425 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.425 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.425 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.425 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:57.425 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.425 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.425 
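waitforserial and waitforserial_disconnect, expanded repeatedly in this trace, are bounded polls over lsblk: a connect counts as done when a block device whose serial matches SPDKISFASTANDAWESOME appears, a disconnect when it is gone. A standalone sketch of the wait-for-appearance loop, mirroring the traced logic (16 attempts, 2-second sleeps):

    serial=SPDKISFASTANDAWESOME
    i=0
    while (( i++ <= 15 )); do
      # One namespace is expected; grep -c counts matching lines.
      if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == 1 )); then
        break
      fi
      sleep 2
    done

The disconnect variant inverts the check, returning once grep no longer finds the serial.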
16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.425 16:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.807 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.807 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:58.807 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.807 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:58.807 16:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:00.716 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:00.716 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:00.716 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.716 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:00.716 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.716 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:00.716 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.976 
16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.976 [2024-12-06 16:43:49.546394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.976 16:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.882 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.882 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:02.882 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.882 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:02.882 16:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 [2024-12-06 16:43:53.236637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.789 16:43:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:06.166 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:06.166 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:06.166 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:06.166 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:06.166 16:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:08.696 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:08.696 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:08.696 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:08.696 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:08.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.697 [2024-12-06 16:43:56.927434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.697 16:43:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.071 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:10.071 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:10.071 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.071 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:10.071 16:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:11.970 
16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:11.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:11.970 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 [2024-12-06 16:44:00.536525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.971 16:44:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:13.346 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:13.346 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:13.346 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:13.346 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:13.346 16:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 [2024-12-06 16:44:04.193886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.886 16:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.265 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.265 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:17.265 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.265 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:17.265 16:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:19.169 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:19.169 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:19.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:19.170 
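Each iteration traced above runs the same body from target/rpc.sh (@81-@94): create the subsystem with a fixed serial, expose it on a TCP listener, attach the Malloc1 bdev as namespace 5, open it to any host, connect from the initiator with nvme-cli, wait for the serial to show up in lsblk, then disconnect and tear the subsystem back down. A minimal sketch of that loop body, assuming a running SPDK target and the scripts/rpc.py client from this workspace (--hostid omitted for brevity):

  NQN=nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  scripts/rpc.py nvmf_subsystem_allow_any_host "$NQN"
  nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
  # waitforserial's bounded retry ((i++ <= 15)) is collapsed into a plain loop here
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n "$NQN"
  scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 5
  scripts/rpc.py nvmf_delete_subsystem "$NQN"

The seq 1 5 output at the end of this chunk kicks off a second loop (rpc.sh@99-@107) that repeats only the RPC-side lifecycle with no host connection: create, add the listener, add the namespace without an explicit NSID, then remove NSID 1 and delete the subsystem.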
16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 [2024-12-06 16:44:07.817425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.170 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.431 [2024-12-06 16:44:07.865504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.431 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 
16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 [2024-12-06 16:44:07.913641] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 [2024-12-06 16:44:07.961769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 [2024-12-06 16:44:08.009921] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.432 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:19.433 "tick_rate": 2400000000, 00:17:19.433 "poll_groups": [ 00:17:19.433 { 00:17:19.433 "name": "nvmf_tgt_poll_group_000", 00:17:19.433 "admin_qpairs": 0, 00:17:19.433 "io_qpairs": 224, 00:17:19.433 "current_admin_qpairs": 0, 00:17:19.433 "current_io_qpairs": 0, 00:17:19.433 "pending_bdev_io": 0, 00:17:19.433 "completed_nvme_io": 521, 00:17:19.433 "transports": [ 00:17:19.433 { 00:17:19.433 "trtype": "TCP" 00:17:19.433 } 00:17:19.433 ] 00:17:19.433 }, 00:17:19.433 { 00:17:19.433 "name": "nvmf_tgt_poll_group_001", 00:17:19.433 "admin_qpairs": 1, 00:17:19.433 "io_qpairs": 223, 00:17:19.433 "current_admin_qpairs": 0, 00:17:19.433 "current_io_qpairs": 0, 00:17:19.433 "pending_bdev_io": 0, 00:17:19.433 "completed_nvme_io": 226, 00:17:19.433 "transports": [ 00:17:19.433 { 00:17:19.433 "trtype": "TCP" 00:17:19.433 } 00:17:19.433 ] 00:17:19.433 }, 00:17:19.433 { 00:17:19.433 "name": "nvmf_tgt_poll_group_002", 00:17:19.433 "admin_qpairs": 6, 00:17:19.433 "io_qpairs": 218, 00:17:19.433 "current_admin_qpairs": 0, 00:17:19.433 "current_io_qpairs": 0, 00:17:19.433 "pending_bdev_io": 0, 00:17:19.433 "completed_nvme_io": 218, 00:17:19.433 "transports": [ 00:17:19.433 { 00:17:19.433 "trtype": "TCP" 00:17:19.433 } 00:17:19.433 ] 00:17:19.433 }, 00:17:19.433 { 00:17:19.433 "name": "nvmf_tgt_poll_group_003", 00:17:19.433 "admin_qpairs": 0, 00:17:19.433 "io_qpairs": 224, 00:17:19.433 "current_admin_qpairs": 0, 00:17:19.433 "current_io_qpairs": 0, 00:17:19.433 "pending_bdev_io": 0, 00:17:19.433 "completed_nvme_io": 274, 00:17:19.433 "transports": [ 00:17:19.433 { 00:17:19.433 "trtype": "TCP" 00:17:19.433 } 00:17:19.433 ] 00:17:19.433 } 00:17:19.433 ] 00:17:19.433 }' 00:17:19.433 16:44:08 
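The nvmf_get_stats dump above reports one object per poll group; the checks that follow sum a single counter across all groups with the jsum helper (jq extracts the field, awk adds the values). For this run the sums work out to 7 admin qpairs (0 + 1 + 6 + 0) and 889 I/O qpairs (224 + 223 + 218 + 224), which is what the (( 7 > 0 )) and (( 889 > 0 )) assertions below are testing. An equivalent standalone pipeline, assuming scripts/rpc.py reaches the same target:

  scripts/rpc.py nvmf_get_stats \
      | jq '.poll_groups[].io_qpairs' \
      | awk '{s+=$1} END {print s}'   # prints 889 for the stats above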
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:19.433 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:19.694 rmmod nvme_tcp 00:17:19.694 rmmod nvme_fabrics 00:17:19.694 rmmod nvme_keyring 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2168020 ']' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2168020 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2168020 ']' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2168020 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2168020 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2168020' 00:17:19.694 killing process with pid 2168020 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2168020 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2168020 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.694 16:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:22.234 00:17:22.234 real 0m35.040s 00:17:22.234 user 1m49.455s 00:17:22.234 sys 0m6.023s 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.234 ************************************ 00:17:22.234 END TEST nvmf_rpc 00:17:22.234 ************************************ 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.234 ************************************ 00:17:22.234 START TEST nvmf_invalid 00:17:22.234 ************************************ 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:22.234 * Looking for test storage... 
00:17:22.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.234 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:22.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.235 --rc genhtml_branch_coverage=1 00:17:22.235 --rc genhtml_function_coverage=1 00:17:22.235 --rc genhtml_legend=1 00:17:22.235 --rc geninfo_all_blocks=1 00:17:22.235 --rc geninfo_unexecuted_blocks=1 00:17:22.235 00:17:22.235 ' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:22.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.235 --rc genhtml_branch_coverage=1 00:17:22.235 --rc genhtml_function_coverage=1 00:17:22.235 --rc genhtml_legend=1 00:17:22.235 --rc geninfo_all_blocks=1 00:17:22.235 --rc geninfo_unexecuted_blocks=1 00:17:22.235 00:17:22.235 ' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:22.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.235 --rc genhtml_branch_coverage=1 00:17:22.235 --rc genhtml_function_coverage=1 00:17:22.235 --rc genhtml_legend=1 00:17:22.235 --rc geninfo_all_blocks=1 00:17:22.235 --rc geninfo_unexecuted_blocks=1 00:17:22.235 00:17:22.235 ' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:22.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.235 --rc genhtml_branch_coverage=1 00:17:22.235 --rc genhtml_function_coverage=1 00:17:22.235 --rc genhtml_legend=1 00:17:22.235 --rc geninfo_all_blocks=1 00:17:22.235 --rc geninfo_unexecuted_blocks=1 00:17:22.235 00:17:22.235 ' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:22.235 16:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.235 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:22.236 16:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:27.521 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:27.521 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:27.521 Found net devices under 0000:31:00.0: cvl_0_0 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:27.521 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:27.521 Found net devices under 0000:31:00.1: cvl_0_1 00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:27.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:27.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms
00:17:27.522 
00:17:27.522 --- 10.0.0.2 ping statistics ---
00:17:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:27.522 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:27.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:27.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms
00:17:27.522 
00:17:27.522 --- 10.0.0.1 ping statistics ---
00:17:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:27.522 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2178406
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2178406
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2178406 ']'
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:27.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:27.522 16:44:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:27.522 [2024-12-06 16:44:15.951080] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
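
At this point nvmftestinit has finished assembling the rig the rest of the run depends on: one of the two ice ports (cvl_0_0) was moved into a private network namespace to act as the NVMe-oF target, the other (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 was opened in the firewall, and a one-packet ping in each direction confirmed reachability. Condensed from the trace above into plain commands (the interface names are the ones discovered on this particular machine):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With connectivity verified, nvmf_tgt is launched inside the namespace, and the invalid-input tests that follow talk to it through /var/tmp/spdk.sock via scripts/rpc.py.
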
00:17:27.522 [2024-12-06 16:44:15.951159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:27.522 [2024-12-06 16:44:16.044964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:27.522 [2024-12-06 16:44:16.073668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:27.522 [2024-12-06 16:44:16.073722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:27.522 [2024-12-06 16:44:16.073730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:27.522 [2024-12-06 16:44:16.073738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:27.522 [2024-12-06 16:44:16.073745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:27.522 [2024-12-06 16:44:16.075725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:27.522 [2024-12-06 16:44:16.075886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:27.522 [2024-12-06 16:44:16.076047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:27.522 [2024-12-06 16:44:16.076049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:17:28.091 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7394
00:17:28.351 [2024-12-06 16:44:16.910021] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:17:28.351 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:17:28.351 {
00:17:28.351 "nqn": "nqn.2016-06.io.spdk:cnode7394",
00:17:28.351 "tgt_name": "foobar",
00:17:28.351 "method": "nvmf_create_subsystem",
00:17:28.351 "req_id": 1
00:17:28.351 }
00:17:28.351 Got JSON-RPC error response
00:17:28.351 response:
00:17:28.351 {
00:17:28.351 "code": -32603,
00:17:28.351 "message": "Unable to find target foobar"
00:17:28.351 }'
00:17:28.351 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:17:28.351 {
00:17:28.351 "nqn": "nqn.2016-06.io.spdk:cnode7394",
00:17:28.351 "tgt_name": "foobar",
00:17:28.351 "method": "nvmf_create_subsystem",
00:17:28.351 "req_id": 1
00:17:28.351 }
00:17:28.351 Got JSON-RPC error response
00:17:28.351 response:
00:17:28.351 {
00:17:28.351 "code": -32603,
00:17:28.351 "message": "Unable to find target foobar"
00:17:28.351 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:17:28.351 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:17:28.351 16:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6849
00:17:28.610 [2024-12-06 16:44:17.094751] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6849: invalid serial number 'SPDKISFASTANDAWESOME'
00:17:28.610 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:17:28.610 {
00:17:28.610 "nqn": "nqn.2016-06.io.spdk:cnode6849",
00:17:28.610 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:17:28.610 "method": "nvmf_create_subsystem",
00:17:28.610 "req_id": 1
00:17:28.610 }
00:17:28.610 Got JSON-RPC error response
00:17:28.610 response:
00:17:28.610 {
00:17:28.610 "code": -32602,
00:17:28.611 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:17:28.611 }'
00:17:28.611 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:17:28.611 {
00:17:28.611 "nqn": "nqn.2016-06.io.spdk:cnode6849",
00:17:28.611 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:17:28.611 "method": "nvmf_create_subsystem",
00:17:28.611 "req_id": 1
00:17:28.611 }
00:17:28.611 Got JSON-RPC error response
00:17:28.611 response:
00:17:28.611 {
00:17:28.611 "code": -32602,
00:17:28.611 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:17:28.611 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:17:28.611 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:17:28.611 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28561
00:17:28.611 [2024-12-06 16:44:17.275343] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28561: invalid model number 'SPDK_Controller'
00:17:28.611 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:17:28.611 {
00:17:28.611 "nqn": "nqn.2016-06.io.spdk:cnode28561",
00:17:28.611 "model_number": "SPDK_Controller\u001f",
00:17:28.611 "method": "nvmf_create_subsystem",
00:17:28.611 "req_id": 1
00:17:28.611 }
00:17:28.611 Got JSON-RPC error response
00:17:28.611 response:
00:17:28.611 {
00:17:28.611 "code": -32602,
00:17:28.611 "message": "Invalid MN SPDK_Controller\u001f"
00:17:28.611 }'
00:17:28.611 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:17:28.611 {
00:17:28.611 "nqn": "nqn.2016-06.io.spdk:cnode28561",
00:17:28.611 "model_number": "SPDK_Controller\u001f",
00:17:28.611 "method": "nvmf_create_subsystem",
00:17:28.611 "req_id": 1
00:17:28.611 }
00:17:28.611 Got JSON-RPC error response
00:17:28.611 response:
00:17:28.611 {
00:17:28.611 "code": -32602,
00:17:28.611 "message": "Invalid MN SPDK_Controller\u001f"
00:17:28.611 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:17:28.872 16:44:17
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.872 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:28.873 
16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:28.873 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 
00:17:28.884 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '6V8k*H]W]k|0%Ezj|e~$v' 00:17:28.885 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '6V8k*H]W]k|0%Ezj|e~$v' nqn.2016-06.io.spdk:cnode16082 00:17:29.147 [2024-12-06 16:44:17.568441] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16082: invalid serial number '6V8k*H]W]k|0%Ezj|e~$v' 00:17:29.147 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:29.147 { 00:17:29.147 "nqn": "nqn.2016-06.io.spdk:cnode16082", 00:17:29.147 "serial_number": "6V8k*H]W]k|0%Ezj|e~$v", 00:17:29.147 "method": "nvmf_create_subsystem", 00:17:29.147 "req_id": 1 00:17:29.147 } 00:17:29.147 Got JSON-RPC error response 00:17:29.147 response: 00:17:29.147 { 00:17:29.147 "code": -32602, 00:17:29.147 "message": "Invalid SN 6V8k*H]W]k|0%Ezj|e~$v" 00:17:29.147 }' 00:17:29.147 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:29.147 { 00:17:29.147 "nqn": "nqn.2016-06.io.spdk:cnode16082", 00:17:29.147 "serial_number": "6V8k*H]W]k|0%Ezj|e~$v", 00:17:29.147 "method": "nvmf_create_subsystem", 00:17:29.147 "req_id": 1 00:17:29.147 } 00:17:29.147 Got JSON-RPC error response 00:17:29.147 response: 00:17:29.147 { 00:17:29.147 "code": -32602, 00:17:29.147 "message": "Invalid SN 6V8k*H]W]k|0%Ezj|e~$v" 00:17:29.147 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:29.147 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:29.147 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:29.147 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:29.148 
16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:29.148 
16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 
16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:29.148 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 
16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:29.149 
16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 
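
The long run of printf/echo/string+= entries on either side of this point is gen_random_s at work: the script draws a code point from its printable-ASCII table ('32' through '127'), renders it with printf %x plus echo -e '\xHH', and appends the resulting character until the requested length (41 here, 21 for the earlier serial number) is reached. Because RANDOM=0 is set at the top of invalid.sh, the "random" strings are reproducible across runs. A compact reconstruction of the loop, inferred from the trace rather than copied from the script, would be:

    # Build a pseudo-random printable string of the requested length,
    # mirroring the printf %x / echo -e '\xHH' / string+= steps traced above.
    gen_random_s() {
        local length=$1 ll string= hex
        local chars=($(seq 32 127))   # same code-point table as the trace
        for (( ll = 0; ll < length; ll++ )); do
            printf -v hex '%x' "${chars[RANDOM % ${#chars[@]}]}"
            string+=$(echo -e "\x$hex")
        done
        echo "$string"
    }

The 21-character serial tested earlier ('6V8k*H]W]k|0%Ezj|e~$v') and the 41-character model number being assembled here both come out of this helper.
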
00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'v,-8Xh&qzm=aaOANt9l#W~qN$>bc$N7Z/[FUiU9Er' 00:17:29.149 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'v,-8Xh&qzm=aaOANt9l#W~qN$>bc$N7Z/[FUiU9Er' nqn.2016-06.io.spdk:cnode20782 00:17:29.410 [2024-12-06 16:44:17.957850] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20782: invalid model number 'v,-8Xh&qzm=aaOANt9l#W~qN$>bc$N7Z/[FUiU9Er' 00:17:29.410 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:29.410 { 00:17:29.410 "nqn": "nqn.2016-06.io.spdk:cnode20782", 00:17:29.410 "model_number": "v,-8Xh&qzm=aaOANt9l#W~qN$>bc$N7Z/[FUiU9Er", 00:17:29.410 "method": "nvmf_create_subsystem", 00:17:29.410 "req_id": 1 00:17:29.410 } 00:17:29.410 Got JSON-RPC error response 00:17:29.410 response: 00:17:29.410 { 00:17:29.410 "code": -32602, 00:17:29.410 "message": "Invalid MN v,-8Xh&qzm=aaOANt9l#W~qN$>bc$N7Z/[FUiU9Er" 00:17:29.410 }' 00:17:29.410 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:29.410 { 00:17:29.410 "nqn": "nqn.2016-06.io.spdk:cnode20782", 00:17:29.410 "model_number": "v,-8Xh&qzm=aaOANt9l#W~qN$>bc$N7Z/[FUiU9Er", 00:17:29.410 "method": "nvmf_create_subsystem", 00:17:29.410 "req_id": 1 00:17:29.410 } 00:17:29.410 Got JSON-RPC error response 00:17:29.410 response: 00:17:29.410 { 00:17:29.410 "code": -32602, 00:17:29.410 "message": "Invalid MN v,-8Xh&qzm=aaOANt9l#W~qN$>bc$N7Z/[FUiU9Er" 00:17:29.410 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:29.410 16:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:29.672 [2024-12-06 16:44:18.118467] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 
00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:17:29.672 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:17:29.931 [2024-12-06 16:44:18.447475] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:17:29.931 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:17:29.931 {
00:17:29.931 "nqn": "nqn.2016-06.io.spdk:cnode",
00:17:29.931 "listen_address": {
00:17:29.931 "trtype": "tcp",
00:17:29.931 "traddr": "",
00:17:29.931 "trsvcid": "4421"
00:17:29.931 },
00:17:29.931 "method": "nvmf_subsystem_remove_listener",
00:17:29.931 "req_id": 1
00:17:29.931 }
00:17:29.931 Got JSON-RPC error response
00:17:29.931 response:
00:17:29.931 {
00:17:29.931 "code": -32602,
00:17:29.931 "message": "Invalid parameters"
00:17:29.931 }'
00:17:29.931 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:17:29.931 {
00:17:29.931 "nqn": "nqn.2016-06.io.spdk:cnode",
00:17:29.931 "listen_address": {
00:17:29.931 "trtype": "tcp",
00:17:29.931 "traddr": "",
00:17:29.931 "trsvcid": "4421"
00:17:29.931 },
00:17:29.931 "method": "nvmf_subsystem_remove_listener",
00:17:29.931 "req_id": 1
00:17:29.931 }
00:17:29.931 Got JSON-RPC error response
00:17:29.931 response:
00:17:29.931 {
00:17:29.931 "code": -32602,
00:17:29.931 "message": "Invalid parameters"
00:17:29.931 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:17:29.931 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31536 -i 0
00:17:29.931 [2024-12-06 16:44:18.607939] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31536: invalid cntlid range [0-65519]
00:17:30.191 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:17:30.191 {
00:17:30.191 "nqn": "nqn.2016-06.io.spdk:cnode31536",
00:17:30.191 "min_cntlid": 0,
00:17:30.191 "method": "nvmf_create_subsystem",
00:17:30.191 "req_id": 1
00:17:30.191 }
00:17:30.191 Got JSON-RPC error response
00:17:30.191 response:
00:17:30.191 {
00:17:30.191 "code": -32602,
00:17:30.191 "message": "Invalid cntlid range [0-65519]"
00:17:30.191 }'
00:17:30.191 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:17:30.191 {
00:17:30.191 "nqn": "nqn.2016-06.io.spdk:cnode31536",
00:17:30.191 "min_cntlid": 0,
00:17:30.191 "method": "nvmf_create_subsystem",
00:17:30.191 "req_id": 1
00:17:30.191 }
00:17:30.191 Got JSON-RPC error response
00:17:30.191 response:
00:17:30.191 {
00:17:30.191 "code": -32602,
00:17:30.191 "message": "Invalid cntlid range [0-65519]"
00:17:30.191 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
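
The min_cntlid=0 rejection above is the first of five boundary probes on the controller-ID range. Controller IDs are 16-bit values and, as the error strings here suggest, SPDK accepts only 1 through 65519 (0xFFEF, the top of the range being reserved), with min not exceeding max. Using the same $rpc shorthand as in the sketch above, the full set of probes, each expected to fail with -32602 'Invalid cntlid range', is:

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31536 -i 0       # min below 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14071 -i 65520   # min above 65519
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9962 -I 0        # max below min
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20232 -I 65520   # max above 65519
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30332 -i 6 -I 5  # min greater than max

The trace that follows walks through the remaining four.
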
"method": "nvmf_create_subsystem", 00:17:30.191 "req_id": 1 00:17:30.191 } 00:17:30.191 Got JSON-RPC error response 00:17:30.191 response: 00:17:30.191 { 00:17:30.191 "code": -32602, 00:17:30.191 "message": "Invalid cntlid range [65520-65519]" 00:17:30.191 }' 00:17:30.191 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:30.191 { 00:17:30.191 "nqn": "nqn.2016-06.io.spdk:cnode14071", 00:17:30.191 "min_cntlid": 65520, 00:17:30.191 "method": "nvmf_create_subsystem", 00:17:30.191 "req_id": 1 00:17:30.191 } 00:17:30.191 Got JSON-RPC error response 00:17:30.191 response: 00:17:30.191 { 00:17:30.191 "code": -32602, 00:17:30.191 "message": "Invalid cntlid range [65520-65519]" 00:17:30.191 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.191 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9962 -I 0 00:17:30.450 [2024-12-06 16:44:18.928928] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9962: invalid cntlid range [1-0] 00:17:30.450 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:30.450 { 00:17:30.450 "nqn": "nqn.2016-06.io.spdk:cnode9962", 00:17:30.450 "max_cntlid": 0, 00:17:30.450 "method": "nvmf_create_subsystem", 00:17:30.450 "req_id": 1 00:17:30.450 } 00:17:30.450 Got JSON-RPC error response 00:17:30.450 response: 00:17:30.450 { 00:17:30.450 "code": -32602, 00:17:30.450 "message": "Invalid cntlid range [1-0]" 00:17:30.450 }' 00:17:30.450 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:30.450 { 00:17:30.450 "nqn": "nqn.2016-06.io.spdk:cnode9962", 00:17:30.450 "max_cntlid": 0, 00:17:30.450 "method": "nvmf_create_subsystem", 00:17:30.450 "req_id": 1 00:17:30.450 } 00:17:30.450 Got JSON-RPC error response 00:17:30.450 response: 00:17:30.450 { 00:17:30.450 "code": -32602, 00:17:30.450 "message": "Invalid cntlid range [1-0]" 00:17:30.450 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.450 16:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20232 -I 65520 00:17:30.450 [2024-12-06 16:44:19.089459] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20232: invalid cntlid range [1-65520] 00:17:30.450 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:30.450 { 00:17:30.450 "nqn": "nqn.2016-06.io.spdk:cnode20232", 00:17:30.450 "max_cntlid": 65520, 00:17:30.450 "method": "nvmf_create_subsystem", 00:17:30.450 "req_id": 1 00:17:30.450 } 00:17:30.450 Got JSON-RPC error response 00:17:30.450 response: 00:17:30.450 { 00:17:30.450 "code": -32602, 00:17:30.451 "message": "Invalid cntlid range [1-65520]" 00:17:30.451 }' 00:17:30.451 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:30.451 { 00:17:30.451 "nqn": "nqn.2016-06.io.spdk:cnode20232", 00:17:30.451 "max_cntlid": 65520, 00:17:30.451 "method": "nvmf_create_subsystem", 00:17:30.451 "req_id": 1 00:17:30.451 } 00:17:30.451 Got JSON-RPC error response 00:17:30.451 response: 00:17:30.451 { 00:17:30.451 "code": -32602, 00:17:30.451 "message": "Invalid cntlid range [1-65520]" 00:17:30.451 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.451 16:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30332 -i 6 -I 5 00:17:30.710 [2024-12-06 16:44:19.249959] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30332: invalid cntlid range [6-5] 00:17:30.710 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:30.710 { 00:17:30.710 "nqn": "nqn.2016-06.io.spdk:cnode30332", 00:17:30.710 "min_cntlid": 6, 00:17:30.710 "max_cntlid": 5, 00:17:30.710 "method": "nvmf_create_subsystem", 00:17:30.710 "req_id": 1 00:17:30.711 } 00:17:30.711 Got JSON-RPC error response 00:17:30.711 response: 00:17:30.711 { 00:17:30.711 "code": -32602, 00:17:30.711 "message": "Invalid cntlid range [6-5]" 00:17:30.711 }' 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:30.711 { 00:17:30.711 "nqn": "nqn.2016-06.io.spdk:cnode30332", 00:17:30.711 "min_cntlid": 6, 00:17:30.711 "max_cntlid": 5, 00:17:30.711 "method": "nvmf_create_subsystem", 00:17:30.711 "req_id": 1 00:17:30.711 } 00:17:30.711 Got JSON-RPC error response 00:17:30.711 response: 00:17:30.711 { 00:17:30.711 "code": -32602, 00:17:30.711 "message": "Invalid cntlid range [6-5]" 00:17:30.711 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:30.711 { 00:17:30.711 "name": "foobar", 00:17:30.711 "method": "nvmf_delete_target", 00:17:30.711 "req_id": 1 00:17:30.711 } 00:17:30.711 Got JSON-RPC error response 00:17:30.711 response: 00:17:30.711 { 00:17:30.711 "code": -32602, 00:17:30.711 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:30.711 }' 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:30.711 { 00:17:30.711 "name": "foobar", 00:17:30.711 "method": "nvmf_delete_target", 00:17:30.711 "req_id": 1 00:17:30.711 } 00:17:30.711 Got JSON-RPC error response 00:17:30.711 response: 00:17:30.711 { 00:17:30.711 "code": -32602, 00:17:30.711 "message": "The specified target doesn't exist, cannot delete it." 
00:17:30.711 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.711 rmmod nvme_tcp 00:17:30.711 rmmod nvme_fabrics 00:17:30.711 rmmod nvme_keyring 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2178406 ']' 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2178406 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2178406 ']' 00:17:30.711 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2178406 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178406 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178406' 00:17:30.971 killing process with pid 2178406 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2178406 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2178406 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.971 16:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:33.566 00:17:33.566 real 0m11.157s 00:17:33.566 user 0m17.411s 00:17:33.566 sys 0m4.764s 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:33.566 ************************************ 00:17:33.566 END TEST nvmf_invalid 00:17:33.566 ************************************ 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.566 ************************************ 00:17:33.566 START TEST nvmf_connect_stress 00:17:33.566 ************************************ 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:33.566 * Looking for test storage... 
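The nvmf_invalid run that just ended exercises the target's argument validation purely through rpc.py and string matching on the JSON-RPC error text. A minimal sketch of that pattern, assuming a running nvmf_tgt and the in-tree rpc.py; expect_cntlid_error is a hypothetical helper written for illustration, not part of the test suite:

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    expect_cntlid_error() {
        local out
        # The RPC is expected to fail, so capture stderr and mask the exit code.
        out=$("$rpc" nvmf_create_subsystem "$@" 2>&1) || true
        # Assert the target rejected the request with the expected message.
        [[ $out == *"Invalid cntlid range"* ]]
    }
    expect_cntlid_error nqn.2016-06.io.spdk:cnode31536 -i 0       # min below 1
    expect_cntlid_error nqn.2016-06.io.spdk:cnode14071 -i 65520   # min above 65519
    expect_cntlid_error nqn.2016-06.io.spdk:cnode9962  -I 0       # max below 1
    expect_cntlid_error nqn.2016-06.io.spdk:cnode30332 -i 6 -I 5  # min greater than max

The NQNs and cntlid bounds match the requests logged above; the same capture-and-match shape covers the invalid model number and nvmf_delete_target cases.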
00:17:33.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.566 --rc genhtml_branch_coverage=1 00:17:33.566 --rc genhtml_function_coverage=1 00:17:33.566 --rc genhtml_legend=1 00:17:33.566 --rc geninfo_all_blocks=1 00:17:33.566 --rc geninfo_unexecuted_blocks=1 00:17:33.566 00:17:33.566 ' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.566 --rc genhtml_branch_coverage=1 00:17:33.566 --rc genhtml_function_coverage=1 00:17:33.566 --rc genhtml_legend=1 00:17:33.566 --rc geninfo_all_blocks=1 00:17:33.566 --rc geninfo_unexecuted_blocks=1 00:17:33.566 00:17:33.566 ' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.566 --rc genhtml_branch_coverage=1 00:17:33.566 --rc genhtml_function_coverage=1 00:17:33.566 --rc genhtml_legend=1 00:17:33.566 --rc geninfo_all_blocks=1 00:17:33.566 --rc geninfo_unexecuted_blocks=1 00:17:33.566 00:17:33.566 ' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.566 --rc genhtml_branch_coverage=1 00:17:33.566 --rc genhtml_function_coverage=1 00:17:33.566 --rc genhtml_legend=1 00:17:33.566 --rc geninfo_all_blocks=1 00:17:33.566 --rc geninfo_unexecuted_blocks=1 00:17:33.566 00:17:33.566 ' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.566 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:33.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.567 16:44:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.841 16:44:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:38.841 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:38.841 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.841 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:38.842 Found net devices under 0000:31:00.0: cvl_0_0 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:38.842 Found net devices under 0000:31:00.1: cvl_0_1 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.842 16:44:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:38.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:17:38.842 00:17:38.842 --- 10.0.0.2 ping statistics --- 00:17:38.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.842 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:17:38.842 00:17:38.842 --- 10.0.0.1 ping statistics --- 00:17:38.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.842 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2183713 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2183713 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2183713 ']' 00:17:38.842 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.843 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.843 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
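For reference, the nvmf_tcp_init sequence above reduces to a handful of iproute2 and iptables calls; a condensed sketch of what the harness just ran, assuming two ice ports already exposed as cvl_0_0/cvl_0_1 as in this run:

    # Target side lives in its own namespace; the initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity checks in both directions, matching the pings logged above.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Keeping the target behind a namespace boundary is what makes the subsequent ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt invocation reach 10.0.0.2 over a real routed path rather than loopback.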
00:17:38.843 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.843 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.843 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:38.843 [2024-12-06 16:44:27.095017] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:17:38.843 [2024-12-06 16:44:27.095068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.843 [2024-12-06 16:44:27.182945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:38.843 [2024-12-06 16:44:27.202328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.843 [2024-12-06 16:44:27.202364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.843 [2024-12-06 16:44:27.202372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.843 [2024-12-06 16:44:27.202379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.843 [2024-12-06 16:44:27.202385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:38.843 [2024-12-06 16:44:27.203860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.843 [2024-12-06 16:44:27.204010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.843 [2024-12-06 16:44:27.204011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.413 [2024-12-06 16:44:27.930138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:39.413 16:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.413 [2024-12-06 16:44:27.947857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.413 NULL1 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2183760 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:39.413 16:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2183760 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.413 16:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:39.674 16:44:28
[monitor loop condensed: the cycle above, common/autotest_common.sh@591 `[[ 0 == 0 ]]`, connect_stress.sh@34 `kill -0 2183760`, @35 `rpc_cmd`, @563 `xtrace_disable`, @10 `set +x`, repeats identically several times per second from 00:17:39.674 (16:44:28) through 00:17:49.192 (16:44:37) while the stress tool (pid 2183760) runs; only the final iteration is kept below]
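[A minimal bash sketch of that loop, reconstructed from the script line numbers in the trace; the variable names and the exact rpc_cmd redirection are assumptions, not copied from connect_stress.sh:]

    # @34/@35: while the stress tool is alive, keep the target busy with RPCs
    while kill -0 "$perf_pid" 2>/dev/null; do
        rpc_cmd < "$testdir/rpc.txt"    # replay the RPC batch assembled by the @28 `cat`
    done
    wait "$perf_pid"                    # @38: reap it once `kill -0` reports "No such process"
    rm -f "$testdir/rpc.txt"            # @39: remove the batch file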
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.192 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2183760 00:17:49.192 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.192 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.192 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.451 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.451 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2183760 00:17:49.451 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.451 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.451 16:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.451 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2183760 00:17:49.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2183760) - No such process 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2183760 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.710 rmmod nvme_tcp 00:17:49.710 rmmod nvme_fabrics 00:17:49.710 rmmod nvme_keyring 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2183713 ']' 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2183713 00:17:49.710 16:44:38 
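[The nvmftestfini teardown that the next entries trace condenses to the sequence below; a sketch assembled from the commands visible in this log, where the netns removal form inside remove_spdk_ns is an assumption:]

    sync
    modprobe -v -r nvme-tcp nvme-fabrics                   # unload initiator modules (rmmod output above)
    kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess: stop the nvmf_tgt reactor
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk                        # remove_spdk_ns (assumed form)
    ip -4 addr flush cvl_0_1                               # last step traced before the timing summary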
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2183713 ']' 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2183713 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.710 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183713 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183713' 00:17:49.969 killing process with pid 2183713 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2183713 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2183713 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.969 16:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.871 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:51.871 00:17:51.871 real 0m18.912s 00:17:51.871 user 0m41.989s 00:17:51.871 sys 0m7.253s 00:17:51.871 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:51.871 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.871 ************************************ 00:17:51.871 END TEST nvmf_connect_stress 00:17:51.871 ************************************ 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh 
--transport=tcp 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.131 ************************************ 00:17:52.131 START TEST nvmf_fused_ordering 00:17:52.131 ************************************ 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:52.131 * Looking for test storage... 00:17:52.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.131 --rc genhtml_branch_coverage=1 00:17:52.131 --rc genhtml_function_coverage=1 00:17:52.131 --rc genhtml_legend=1 00:17:52.131 --rc geninfo_all_blocks=1 00:17:52.131 --rc geninfo_unexecuted_blocks=1 00:17:52.131 00:17:52.131 ' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.131 --rc genhtml_branch_coverage=1 00:17:52.131 --rc genhtml_function_coverage=1 00:17:52.131 --rc genhtml_legend=1 00:17:52.131 --rc geninfo_all_blocks=1 00:17:52.131 --rc geninfo_unexecuted_blocks=1 00:17:52.131 00:17:52.131 ' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.131 --rc genhtml_branch_coverage=1 00:17:52.131 --rc genhtml_function_coverage=1 00:17:52.131 --rc genhtml_legend=1 00:17:52.131 --rc geninfo_all_blocks=1 00:17:52.131 --rc geninfo_unexecuted_blocks=1 00:17:52.131 00:17:52.131 ' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.131 --rc genhtml_branch_coverage=1 00:17:52.131 --rc genhtml_function_coverage=1 00:17:52.131 --rc genhtml_legend=1 00:17:52.131 --rc geninfo_all_blocks=1 00:17:52.131 --rc geninfo_unexecuted_blocks=1 00:17:52.131 00:17:52.131 ' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.131 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:...(the same three tool directories repeated)...:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[paths/export.sh@2-@4 each rebuild PATH, prepending the golangci, protoc and go bin directories once more per step, then @5 exports it and @6 echoes it; the trace prints the same heavily duplicated string at every step, condensed here] 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:17:52.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.132 16:44:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:58.711 16:44:46 
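[The "[: : integer expression expected" message from nvmf/common.sh line 33, a few entries up, is the classic test(1) pitfall: -eq demands integer operands and the tested variable expanded to the empty string. A minimal reproduction with one conventional guard; illustrative shell, not the autotest source:]

    flag=""
    [ "$flag" -eq 1 ] && echo on        # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on   # default the empty value to 0; the test is now well-formed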
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:58.711 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:58.711 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:58.711 Found net devices under 0000:31:00.0: cvl_0_0 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:58.711 Found net devices under 0000:31:00.1: cvl_0_1 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:58.711 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:58.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:17:58.712 00:17:58.712 --- 10.0.0.2 ping statistics --- 00:17:58.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.712 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:17:58.712 00:17:58.712 --- 10.0.0.1 ping statistics --- 00:17:58.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.712 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2190452 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2190452 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2190452 ']' 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:58.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 [2024-12-06 16:44:46.541780] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:17:58.712 [2024-12-06 16:44:46.541844] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.712 [2024-12-06 16:44:46.620817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.712 [2024-12-06 16:44:46.640998] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.712 [2024-12-06 16:44:46.641038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.712 [2024-12-06 16:44:46.641046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.712 [2024-12-06 16:44:46.641052] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.712 [2024-12-06 16:44:46.641057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.712 [2024-12-06 16:44:46.641668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 [2024-12-06 16:44:46.747877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 [2024-12-06 16:44:46.764060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 NULL1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.713 16:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:58.713 [2024-12-06 16:44:46.805211] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
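[The target bring-up just traced reduces to the RPC sequence below; every command and argument is verbatim from the log, and only the $SPDK_DIR shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk is introduced here:]

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512 B blocks ("size: 1GB" below)
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    "$SPDK_DIR"/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'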
00:17:58.713 [2024-12-06 16:44:46.805239] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190641 ] 00:17:58.713 Attached to nqn.2016-06.io.spdk:cnode1 00:17:58.713 Namespace ID: 1 size: 1GB 00:17:58.713 fused_ordering(0) 00:17:58.713 fused_ordering(1) 00:17:58.713 fused_ordering(2) 00:17:58.713 fused_ordering(3) 00:17:58.713
[fused_ordering(4) through fused_ordering(312) condensed: the tool emits one numbered fused_ordering(N) line per iteration; 309 such entries, timestamped 00:17:58.713 through 00:17:58.972, are elided here] 00:17:58.972 fused_ordering(313)
00:17:58.972 fused_ordering(314) 00:17:58.972 fused_ordering(315) 00:17:58.972 fused_ordering(316) 00:17:58.972 fused_ordering(317) 00:17:58.972 fused_ordering(318) 00:17:58.972 fused_ordering(319) 00:17:58.972 fused_ordering(320) 00:17:58.972 fused_ordering(321) 00:17:58.972 fused_ordering(322) 00:17:58.972 fused_ordering(323) 00:17:58.972 fused_ordering(324) 00:17:58.972 fused_ordering(325) 00:17:58.972 fused_ordering(326) 00:17:58.972 fused_ordering(327) 00:17:58.972 fused_ordering(328) 00:17:58.972 fused_ordering(329) 00:17:58.972 fused_ordering(330) 00:17:58.972 fused_ordering(331) 00:17:58.972 fused_ordering(332) 00:17:58.972 fused_ordering(333) 00:17:58.972 fused_ordering(334) 00:17:58.972 fused_ordering(335) 00:17:58.972 fused_ordering(336) 00:17:58.972 fused_ordering(337) 00:17:58.972 fused_ordering(338) 00:17:58.972 fused_ordering(339) 00:17:58.972 fused_ordering(340) 00:17:58.972 fused_ordering(341) 00:17:58.972 fused_ordering(342) 00:17:58.972 fused_ordering(343) 00:17:58.972 fused_ordering(344) 00:17:58.972 fused_ordering(345) 00:17:58.972 fused_ordering(346) 00:17:58.972 fused_ordering(347) 00:17:58.972 fused_ordering(348) 00:17:58.972 fused_ordering(349) 00:17:58.972 fused_ordering(350) 00:17:58.972 fused_ordering(351) 00:17:58.972 fused_ordering(352) 00:17:58.972 fused_ordering(353) 00:17:58.972 fused_ordering(354) 00:17:58.972 fused_ordering(355) 00:17:58.972 fused_ordering(356) 00:17:58.972 fused_ordering(357) 00:17:58.972 fused_ordering(358) 00:17:58.972 fused_ordering(359) 00:17:58.972 fused_ordering(360) 00:17:58.972 fused_ordering(361) 00:17:58.972 fused_ordering(362) 00:17:58.972 fused_ordering(363) 00:17:58.972 fused_ordering(364) 00:17:58.972 fused_ordering(365) 00:17:58.972 fused_ordering(366) 00:17:58.972 fused_ordering(367) 00:17:58.972 fused_ordering(368) 00:17:58.972 fused_ordering(369) 00:17:58.972 fused_ordering(370) 00:17:58.972 fused_ordering(371) 00:17:58.972 fused_ordering(372) 00:17:58.972 fused_ordering(373) 00:17:58.972 fused_ordering(374) 00:17:58.972 fused_ordering(375) 00:17:58.972 fused_ordering(376) 00:17:58.972 fused_ordering(377) 00:17:58.972 fused_ordering(378) 00:17:58.972 fused_ordering(379) 00:17:58.972 fused_ordering(380) 00:17:58.972 fused_ordering(381) 00:17:58.972 fused_ordering(382) 00:17:58.972 fused_ordering(383) 00:17:58.972 fused_ordering(384) 00:17:58.972 fused_ordering(385) 00:17:58.972 fused_ordering(386) 00:17:58.972 fused_ordering(387) 00:17:58.972 fused_ordering(388) 00:17:58.972 fused_ordering(389) 00:17:58.972 fused_ordering(390) 00:17:58.972 fused_ordering(391) 00:17:58.973 fused_ordering(392) 00:17:58.973 fused_ordering(393) 00:17:58.973 fused_ordering(394) 00:17:58.973 fused_ordering(395) 00:17:58.973 fused_ordering(396) 00:17:58.973 fused_ordering(397) 00:17:58.973 fused_ordering(398) 00:17:58.973 fused_ordering(399) 00:17:58.973 fused_ordering(400) 00:17:58.973 fused_ordering(401) 00:17:58.973 fused_ordering(402) 00:17:58.973 fused_ordering(403) 00:17:58.973 fused_ordering(404) 00:17:58.973 fused_ordering(405) 00:17:58.973 fused_ordering(406) 00:17:58.973 fused_ordering(407) 00:17:58.973 fused_ordering(408) 00:17:58.973 fused_ordering(409) 00:17:58.973 fused_ordering(410) 00:17:59.231 fused_ordering(411) 00:17:59.231 fused_ordering(412) 00:17:59.231 fused_ordering(413) 00:17:59.231 fused_ordering(414) 00:17:59.231 fused_ordering(415) 00:17:59.231 fused_ordering(416) 00:17:59.231 fused_ordering(417) 00:17:59.231 fused_ordering(418) 00:17:59.231 fused_ordering(419) 00:17:59.231 fused_ordering(420) 00:17:59.231 
fused_ordering(421) 00:17:59.231 fused_ordering(422) 00:17:59.231 fused_ordering(423) 00:17:59.231 fused_ordering(424) 00:17:59.231 fused_ordering(425) 00:17:59.231 fused_ordering(426) 00:17:59.231 fused_ordering(427) 00:17:59.231 fused_ordering(428) 00:17:59.231 fused_ordering(429) 00:17:59.231 fused_ordering(430) 00:17:59.231 fused_ordering(431) 00:17:59.231 fused_ordering(432) 00:17:59.231 fused_ordering(433) 00:17:59.231 fused_ordering(434) 00:17:59.231 fused_ordering(435) 00:17:59.231 fused_ordering(436) 00:17:59.231 fused_ordering(437) 00:17:59.231 fused_ordering(438) 00:17:59.231 fused_ordering(439) 00:17:59.231 fused_ordering(440) 00:17:59.231 fused_ordering(441) 00:17:59.231 fused_ordering(442) 00:17:59.231 fused_ordering(443) 00:17:59.231 fused_ordering(444) 00:17:59.231 fused_ordering(445) 00:17:59.231 fused_ordering(446) 00:17:59.231 fused_ordering(447) 00:17:59.231 fused_ordering(448) 00:17:59.231 fused_ordering(449) 00:17:59.231 fused_ordering(450) 00:17:59.231 fused_ordering(451) 00:17:59.231 fused_ordering(452) 00:17:59.231 fused_ordering(453) 00:17:59.231 fused_ordering(454) 00:17:59.231 fused_ordering(455) 00:17:59.231 fused_ordering(456) 00:17:59.231 fused_ordering(457) 00:17:59.231 fused_ordering(458) 00:17:59.231 fused_ordering(459) 00:17:59.231 fused_ordering(460) 00:17:59.231 fused_ordering(461) 00:17:59.231 fused_ordering(462) 00:17:59.231 fused_ordering(463) 00:17:59.231 fused_ordering(464) 00:17:59.231 fused_ordering(465) 00:17:59.231 fused_ordering(466) 00:17:59.231 fused_ordering(467) 00:17:59.231 fused_ordering(468) 00:17:59.231 fused_ordering(469) 00:17:59.231 fused_ordering(470) 00:17:59.231 fused_ordering(471) 00:17:59.231 fused_ordering(472) 00:17:59.231 fused_ordering(473) 00:17:59.231 fused_ordering(474) 00:17:59.231 fused_ordering(475) 00:17:59.231 fused_ordering(476) 00:17:59.231 fused_ordering(477) 00:17:59.231 fused_ordering(478) 00:17:59.231 fused_ordering(479) 00:17:59.231 fused_ordering(480) 00:17:59.232 fused_ordering(481) 00:17:59.232 fused_ordering(482) 00:17:59.232 fused_ordering(483) 00:17:59.232 fused_ordering(484) 00:17:59.232 fused_ordering(485) 00:17:59.232 fused_ordering(486) 00:17:59.232 fused_ordering(487) 00:17:59.232 fused_ordering(488) 00:17:59.232 fused_ordering(489) 00:17:59.232 fused_ordering(490) 00:17:59.232 fused_ordering(491) 00:17:59.232 fused_ordering(492) 00:17:59.232 fused_ordering(493) 00:17:59.232 fused_ordering(494) 00:17:59.232 fused_ordering(495) 00:17:59.232 fused_ordering(496) 00:17:59.232 fused_ordering(497) 00:17:59.232 fused_ordering(498) 00:17:59.232 fused_ordering(499) 00:17:59.232 fused_ordering(500) 00:17:59.232 fused_ordering(501) 00:17:59.232 fused_ordering(502) 00:17:59.232 fused_ordering(503) 00:17:59.232 fused_ordering(504) 00:17:59.232 fused_ordering(505) 00:17:59.232 fused_ordering(506) 00:17:59.232 fused_ordering(507) 00:17:59.232 fused_ordering(508) 00:17:59.232 fused_ordering(509) 00:17:59.232 fused_ordering(510) 00:17:59.232 fused_ordering(511) 00:17:59.232 fused_ordering(512) 00:17:59.232 fused_ordering(513) 00:17:59.232 fused_ordering(514) 00:17:59.232 fused_ordering(515) 00:17:59.232 fused_ordering(516) 00:17:59.232 fused_ordering(517) 00:17:59.232 fused_ordering(518) 00:17:59.232 fused_ordering(519) 00:17:59.232 fused_ordering(520) 00:17:59.232 fused_ordering(521) 00:17:59.232 fused_ordering(522) 00:17:59.232 fused_ordering(523) 00:17:59.232 fused_ordering(524) 00:17:59.232 fused_ordering(525) 00:17:59.232 fused_ordering(526) 00:17:59.232 fused_ordering(527) 00:17:59.232 fused_ordering(528) 
00:17:59.232 fused_ordering(529) 00:17:59.232 fused_ordering(530) 00:17:59.232 fused_ordering(531) 00:17:59.232 fused_ordering(532) 00:17:59.232 fused_ordering(533) 00:17:59.232 fused_ordering(534) 00:17:59.232 fused_ordering(535) 00:17:59.232 fused_ordering(536) 00:17:59.232 fused_ordering(537) 00:17:59.232 fused_ordering(538) 00:17:59.232 fused_ordering(539) 00:17:59.232 fused_ordering(540) 00:17:59.232 fused_ordering(541) 00:17:59.232 fused_ordering(542) 00:17:59.232 fused_ordering(543) 00:17:59.232 fused_ordering(544) 00:17:59.232 fused_ordering(545) 00:17:59.232 fused_ordering(546) 00:17:59.232 fused_ordering(547) 00:17:59.232 fused_ordering(548) 00:17:59.232 fused_ordering(549) 00:17:59.232 fused_ordering(550) 00:17:59.232 fused_ordering(551) 00:17:59.232 fused_ordering(552) 00:17:59.232 fused_ordering(553) 00:17:59.232 fused_ordering(554) 00:17:59.232 fused_ordering(555) 00:17:59.232 fused_ordering(556) 00:17:59.232 fused_ordering(557) 00:17:59.232 fused_ordering(558) 00:17:59.232 fused_ordering(559) 00:17:59.232 fused_ordering(560) 00:17:59.232 fused_ordering(561) 00:17:59.232 fused_ordering(562) 00:17:59.232 fused_ordering(563) 00:17:59.232 fused_ordering(564) 00:17:59.232 fused_ordering(565) 00:17:59.232 fused_ordering(566) 00:17:59.232 fused_ordering(567) 00:17:59.232 fused_ordering(568) 00:17:59.232 fused_ordering(569) 00:17:59.232 fused_ordering(570) 00:17:59.232 fused_ordering(571) 00:17:59.232 fused_ordering(572) 00:17:59.232 fused_ordering(573) 00:17:59.232 fused_ordering(574) 00:17:59.232 fused_ordering(575) 00:17:59.232 fused_ordering(576) 00:17:59.232 fused_ordering(577) 00:17:59.232 fused_ordering(578) 00:17:59.232 fused_ordering(579) 00:17:59.232 fused_ordering(580) 00:17:59.232 fused_ordering(581) 00:17:59.232 fused_ordering(582) 00:17:59.232 fused_ordering(583) 00:17:59.232 fused_ordering(584) 00:17:59.232 fused_ordering(585) 00:17:59.232 fused_ordering(586) 00:17:59.232 fused_ordering(587) 00:17:59.232 fused_ordering(588) 00:17:59.232 fused_ordering(589) 00:17:59.232 fused_ordering(590) 00:17:59.232 fused_ordering(591) 00:17:59.232 fused_ordering(592) 00:17:59.232 fused_ordering(593) 00:17:59.232 fused_ordering(594) 00:17:59.232 fused_ordering(595) 00:17:59.232 fused_ordering(596) 00:17:59.232 fused_ordering(597) 00:17:59.232 fused_ordering(598) 00:17:59.232 fused_ordering(599) 00:17:59.232 fused_ordering(600) 00:17:59.232 fused_ordering(601) 00:17:59.232 fused_ordering(602) 00:17:59.232 fused_ordering(603) 00:17:59.232 fused_ordering(604) 00:17:59.232 fused_ordering(605) 00:17:59.232 fused_ordering(606) 00:17:59.232 fused_ordering(607) 00:17:59.232 fused_ordering(608) 00:17:59.232 fused_ordering(609) 00:17:59.232 fused_ordering(610) 00:17:59.232 fused_ordering(611) 00:17:59.232 fused_ordering(612) 00:17:59.232 fused_ordering(613) 00:17:59.232 fused_ordering(614) 00:17:59.232 fused_ordering(615) 00:17:59.799 fused_ordering(616) 00:17:59.799 fused_ordering(617) 00:17:59.799 fused_ordering(618) 00:17:59.799 fused_ordering(619) 00:17:59.799 fused_ordering(620) 00:17:59.799 fused_ordering(621) 00:17:59.799 fused_ordering(622) 00:17:59.799 fused_ordering(623) 00:17:59.799 fused_ordering(624) 00:17:59.799 fused_ordering(625) 00:17:59.799 fused_ordering(626) 00:17:59.799 fused_ordering(627) 00:17:59.799 fused_ordering(628) 00:17:59.799 fused_ordering(629) 00:17:59.799 fused_ordering(630) 00:17:59.799 fused_ordering(631) 00:17:59.799 fused_ordering(632) 00:17:59.799 fused_ordering(633) 00:17:59.799 fused_ordering(634) 00:17:59.799 fused_ordering(635) 00:17:59.799 
fused_ordering(636) 00:17:59.799 fused_ordering(637) 00:17:59.799 fused_ordering(638) 00:17:59.799 fused_ordering(639) 00:17:59.799 fused_ordering(640) 00:17:59.799 fused_ordering(641) 00:17:59.799 fused_ordering(642) 00:17:59.799 fused_ordering(643) 00:17:59.799 fused_ordering(644) 00:17:59.799 fused_ordering(645) 00:17:59.799 fused_ordering(646) 00:17:59.799 fused_ordering(647) 00:17:59.799 fused_ordering(648) 00:17:59.799 fused_ordering(649) 00:17:59.799 fused_ordering(650) 00:17:59.799 fused_ordering(651) 00:17:59.799 fused_ordering(652) 00:17:59.799 fused_ordering(653) 00:17:59.799 fused_ordering(654) 00:17:59.799 fused_ordering(655) 00:17:59.799 fused_ordering(656) 00:17:59.799 fused_ordering(657) 00:17:59.799 fused_ordering(658) 00:17:59.799 fused_ordering(659) 00:17:59.799 fused_ordering(660) 00:17:59.799 fused_ordering(661) 00:17:59.799 fused_ordering(662) 00:17:59.799 fused_ordering(663) 00:17:59.799 fused_ordering(664) 00:17:59.799 fused_ordering(665) 00:17:59.799 fused_ordering(666) 00:17:59.799 fused_ordering(667) 00:17:59.799 fused_ordering(668) 00:17:59.799 fused_ordering(669) 00:17:59.799 fused_ordering(670) 00:17:59.799 fused_ordering(671) 00:17:59.799 fused_ordering(672) 00:17:59.799 fused_ordering(673) 00:17:59.799 fused_ordering(674) 00:17:59.799 fused_ordering(675) 00:17:59.799 fused_ordering(676) 00:17:59.799 fused_ordering(677) 00:17:59.799 fused_ordering(678) 00:17:59.799 fused_ordering(679) 00:17:59.799 fused_ordering(680) 00:17:59.799 fused_ordering(681) 00:17:59.799 fused_ordering(682) 00:17:59.799 fused_ordering(683) 00:17:59.799 fused_ordering(684) 00:17:59.799 fused_ordering(685) 00:17:59.799 fused_ordering(686) 00:17:59.799 fused_ordering(687) 00:17:59.799 fused_ordering(688) 00:17:59.799 fused_ordering(689) 00:17:59.799 fused_ordering(690) 00:17:59.799 fused_ordering(691) 00:17:59.799 fused_ordering(692) 00:17:59.799 fused_ordering(693) 00:17:59.799 fused_ordering(694) 00:17:59.799 fused_ordering(695) 00:17:59.799 fused_ordering(696) 00:17:59.799 fused_ordering(697) 00:17:59.799 fused_ordering(698) 00:17:59.799 fused_ordering(699) 00:17:59.799 fused_ordering(700) 00:17:59.799 fused_ordering(701) 00:17:59.799 fused_ordering(702) 00:17:59.799 fused_ordering(703) 00:17:59.799 fused_ordering(704) 00:17:59.799 fused_ordering(705) 00:17:59.799 fused_ordering(706) 00:17:59.799 fused_ordering(707) 00:17:59.799 fused_ordering(708) 00:17:59.799 fused_ordering(709) 00:17:59.799 fused_ordering(710) 00:17:59.799 fused_ordering(711) 00:17:59.799 fused_ordering(712) 00:17:59.799 fused_ordering(713) 00:17:59.799 fused_ordering(714) 00:17:59.799 fused_ordering(715) 00:17:59.799 fused_ordering(716) 00:17:59.799 fused_ordering(717) 00:17:59.799 fused_ordering(718) 00:17:59.799 fused_ordering(719) 00:17:59.799 fused_ordering(720) 00:17:59.800 fused_ordering(721) 00:17:59.800 fused_ordering(722) 00:17:59.800 fused_ordering(723) 00:17:59.800 fused_ordering(724) 00:17:59.800 fused_ordering(725) 00:17:59.800 fused_ordering(726) 00:17:59.800 fused_ordering(727) 00:17:59.800 fused_ordering(728) 00:17:59.800 fused_ordering(729) 00:17:59.800 fused_ordering(730) 00:17:59.800 fused_ordering(731) 00:17:59.800 fused_ordering(732) 00:17:59.800 fused_ordering(733) 00:17:59.800 fused_ordering(734) 00:17:59.800 fused_ordering(735) 00:17:59.800 fused_ordering(736) 00:17:59.800 fused_ordering(737) 00:17:59.800 fused_ordering(738) 00:17:59.800 fused_ordering(739) 00:17:59.800 fused_ordering(740) 00:17:59.800 fused_ordering(741) 00:17:59.800 fused_ordering(742) 00:17:59.800 fused_ordering(743) 
00:17:59.800 fused_ordering(744) 00:17:59.800 fused_ordering(745) 00:17:59.800 fused_ordering(746) 00:17:59.800 fused_ordering(747) 00:17:59.800 fused_ordering(748) 00:17:59.800 fused_ordering(749) 00:17:59.800 fused_ordering(750) 00:17:59.800 fused_ordering(751) 00:17:59.800 fused_ordering(752) 00:17:59.800 fused_ordering(753) 00:17:59.800 fused_ordering(754) 00:17:59.800 fused_ordering(755) 00:17:59.800 fused_ordering(756) 00:17:59.800 fused_ordering(757) 00:17:59.800 fused_ordering(758) 00:17:59.800 fused_ordering(759) 00:17:59.800 fused_ordering(760) 00:17:59.800 fused_ordering(761) 00:17:59.800 fused_ordering(762) 00:17:59.800 fused_ordering(763) 00:17:59.800 fused_ordering(764) 00:17:59.800 fused_ordering(765) 00:17:59.800 fused_ordering(766) 00:17:59.800 fused_ordering(767) 00:17:59.800 fused_ordering(768) 00:17:59.800 fused_ordering(769) 00:17:59.800 fused_ordering(770) 00:17:59.800 fused_ordering(771) 00:17:59.800 fused_ordering(772) 00:17:59.800 fused_ordering(773) 00:17:59.800 fused_ordering(774) 00:17:59.800 fused_ordering(775) 00:17:59.800 fused_ordering(776) 00:17:59.800 fused_ordering(777) 00:17:59.800 fused_ordering(778) 00:17:59.800 fused_ordering(779) 00:17:59.800 fused_ordering(780) 00:17:59.800 fused_ordering(781) 00:17:59.800 fused_ordering(782) 00:17:59.800 fused_ordering(783) 00:17:59.800 fused_ordering(784) 00:17:59.800 fused_ordering(785) 00:17:59.800 fused_ordering(786) 00:17:59.800 fused_ordering(787) 00:17:59.800 fused_ordering(788) 00:17:59.800 fused_ordering(789) 00:17:59.800 fused_ordering(790) 00:17:59.800 fused_ordering(791) 00:17:59.800 fused_ordering(792) 00:17:59.800 fused_ordering(793) 00:17:59.800 fused_ordering(794) 00:17:59.800 fused_ordering(795) 00:17:59.800 fused_ordering(796) 00:17:59.800 fused_ordering(797) 00:17:59.800 fused_ordering(798) 00:17:59.800 fused_ordering(799) 00:17:59.800 fused_ordering(800) 00:17:59.800 fused_ordering(801) 00:17:59.800 fused_ordering(802) 00:17:59.800 fused_ordering(803) 00:17:59.800 fused_ordering(804) 00:17:59.800 fused_ordering(805) 00:17:59.800 fused_ordering(806) 00:17:59.800 fused_ordering(807) 00:17:59.800 fused_ordering(808) 00:17:59.800 fused_ordering(809) 00:17:59.800 fused_ordering(810) 00:17:59.800 fused_ordering(811) 00:17:59.800 fused_ordering(812) 00:17:59.800 fused_ordering(813) 00:17:59.800 fused_ordering(814) 00:17:59.800 fused_ordering(815) 00:17:59.800 fused_ordering(816) 00:17:59.800 fused_ordering(817) 00:17:59.800 fused_ordering(818) 00:17:59.800 fused_ordering(819) 00:17:59.800 fused_ordering(820) 00:18:00.370 fused_ordering(821) 00:18:00.370 fused_ordering(822) 00:18:00.370 fused_ordering(823) 00:18:00.370 fused_ordering(824) 00:18:00.370 fused_ordering(825) 00:18:00.370 fused_ordering(826) 00:18:00.370 fused_ordering(827) 00:18:00.370 fused_ordering(828) 00:18:00.370 fused_ordering(829) 00:18:00.370 fused_ordering(830) 00:18:00.370 fused_ordering(831) 00:18:00.370 fused_ordering(832) 00:18:00.370 fused_ordering(833) 00:18:00.370 fused_ordering(834) 00:18:00.370 fused_ordering(835) 00:18:00.370 fused_ordering(836) 00:18:00.370 fused_ordering(837) 00:18:00.370 fused_ordering(838) 00:18:00.370 fused_ordering(839) 00:18:00.370 fused_ordering(840) 00:18:00.370 fused_ordering(841) 00:18:00.370 fused_ordering(842) 00:18:00.370 fused_ordering(843) 00:18:00.370 fused_ordering(844) 00:18:00.370 fused_ordering(845) 00:18:00.370 fused_ordering(846) 00:18:00.370 fused_ordering(847) 00:18:00.370 fused_ordering(848) 00:18:00.370 fused_ordering(849) 00:18:00.370 fused_ordering(850) 00:18:00.370 
fused_ordering(851) 00:18:00.370 fused_ordering(852) 00:18:00.370 fused_ordering(853) 00:18:00.370 fused_ordering(854) 00:18:00.370 fused_ordering(855) 00:18:00.370 fused_ordering(856) 00:18:00.370 fused_ordering(857) 00:18:00.370 fused_ordering(858) 00:18:00.370 fused_ordering(859) 00:18:00.370 fused_ordering(860) 00:18:00.370 fused_ordering(861) 00:18:00.370 fused_ordering(862) 00:18:00.370 fused_ordering(863) 00:18:00.370 fused_ordering(864) 00:18:00.370 fused_ordering(865) 00:18:00.370 fused_ordering(866) 00:18:00.370 fused_ordering(867) 00:18:00.370 fused_ordering(868) 00:18:00.370 fused_ordering(869) 00:18:00.370 fused_ordering(870) 00:18:00.370 fused_ordering(871) 00:18:00.370 fused_ordering(872) 00:18:00.370 fused_ordering(873) 00:18:00.370 fused_ordering(874) 00:18:00.370 fused_ordering(875) 00:18:00.370 fused_ordering(876) 00:18:00.370 fused_ordering(877) 00:18:00.370 fused_ordering(878) 00:18:00.370 fused_ordering(879) 00:18:00.370 fused_ordering(880) 00:18:00.370 fused_ordering(881) 00:18:00.370 fused_ordering(882) 00:18:00.370 fused_ordering(883) 00:18:00.370 fused_ordering(884) 00:18:00.370 fused_ordering(885) 00:18:00.370 fused_ordering(886) 00:18:00.370 fused_ordering(887) 00:18:00.370 fused_ordering(888) 00:18:00.370 fused_ordering(889) 00:18:00.370 fused_ordering(890) 00:18:00.370 fused_ordering(891) 00:18:00.370 fused_ordering(892) 00:18:00.370 fused_ordering(893) 00:18:00.370 fused_ordering(894) 00:18:00.370 fused_ordering(895) 00:18:00.370 fused_ordering(896) 00:18:00.370 fused_ordering(897) 00:18:00.370 fused_ordering(898) 00:18:00.370 fused_ordering(899) 00:18:00.370 fused_ordering(900) 00:18:00.370 fused_ordering(901) 00:18:00.370 fused_ordering(902) 00:18:00.370 fused_ordering(903) 00:18:00.370 fused_ordering(904) 00:18:00.370 fused_ordering(905) 00:18:00.370 fused_ordering(906) 00:18:00.370 fused_ordering(907) 00:18:00.370 fused_ordering(908) 00:18:00.370 fused_ordering(909) 00:18:00.370 fused_ordering(910) 00:18:00.370 fused_ordering(911) 00:18:00.370 fused_ordering(912) 00:18:00.370 fused_ordering(913) 00:18:00.370 fused_ordering(914) 00:18:00.370 fused_ordering(915) 00:18:00.370 fused_ordering(916) 00:18:00.370 fused_ordering(917) 00:18:00.370 fused_ordering(918) 00:18:00.370 fused_ordering(919) 00:18:00.370 fused_ordering(920) 00:18:00.370 fused_ordering(921) 00:18:00.370 fused_ordering(922) 00:18:00.370 fused_ordering(923) 00:18:00.370 fused_ordering(924) 00:18:00.370 fused_ordering(925) 00:18:00.370 fused_ordering(926) 00:18:00.370 fused_ordering(927) 00:18:00.370 fused_ordering(928) 00:18:00.370 fused_ordering(929) 00:18:00.370 fused_ordering(930) 00:18:00.370 fused_ordering(931) 00:18:00.370 fused_ordering(932) 00:18:00.370 fused_ordering(933) 00:18:00.370 fused_ordering(934) 00:18:00.370 fused_ordering(935) 00:18:00.370 fused_ordering(936) 00:18:00.370 fused_ordering(937) 00:18:00.370 fused_ordering(938) 00:18:00.370 fused_ordering(939) 00:18:00.370 fused_ordering(940) 00:18:00.370 fused_ordering(941) 00:18:00.370 fused_ordering(942) 00:18:00.370 fused_ordering(943) 00:18:00.370 fused_ordering(944) 00:18:00.370 fused_ordering(945) 00:18:00.370 fused_ordering(946) 00:18:00.370 fused_ordering(947) 00:18:00.370 fused_ordering(948) 00:18:00.370 fused_ordering(949) 00:18:00.370 fused_ordering(950) 00:18:00.370 fused_ordering(951) 00:18:00.370 fused_ordering(952) 00:18:00.370 fused_ordering(953) 00:18:00.370 fused_ordering(954) 00:18:00.370 fused_ordering(955) 00:18:00.370 fused_ordering(956) 00:18:00.370 fused_ordering(957) 00:18:00.370 fused_ordering(958) 
00:18:00.370 fused_ordering(959) 00:18:00.370 fused_ordering(960) 00:18:00.370 fused_ordering(961) 00:18:00.370 fused_ordering(962) 00:18:00.370 fused_ordering(963) 00:18:00.370 fused_ordering(964) 00:18:00.370 fused_ordering(965) 00:18:00.370 fused_ordering(966) 00:18:00.370 fused_ordering(967) 00:18:00.370 fused_ordering(968) 00:18:00.370 fused_ordering(969) 00:18:00.370 fused_ordering(970) 00:18:00.370 fused_ordering(971) 00:18:00.370 fused_ordering(972) 00:18:00.370 fused_ordering(973) 00:18:00.370 fused_ordering(974) 00:18:00.370 fused_ordering(975) 00:18:00.370 fused_ordering(976) 00:18:00.370 fused_ordering(977) 00:18:00.370 fused_ordering(978) 00:18:00.370 fused_ordering(979) 00:18:00.370 fused_ordering(980) 00:18:00.370 fused_ordering(981) 00:18:00.370 fused_ordering(982) 00:18:00.370 fused_ordering(983) 00:18:00.370 fused_ordering(984) 00:18:00.370 fused_ordering(985) 00:18:00.370 fused_ordering(986) 00:18:00.370 fused_ordering(987) 00:18:00.370 fused_ordering(988) 00:18:00.370 fused_ordering(989) 00:18:00.370 fused_ordering(990) 00:18:00.370 fused_ordering(991) 00:18:00.370 fused_ordering(992) 00:18:00.370 fused_ordering(993) 00:18:00.370 fused_ordering(994) 00:18:00.370 fused_ordering(995) 00:18:00.370 fused_ordering(996) 00:18:00.370 fused_ordering(997) 00:18:00.370 fused_ordering(998) 00:18:00.370 fused_ordering(999) 00:18:00.370 fused_ordering(1000) 00:18:00.370 fused_ordering(1001) 00:18:00.370 fused_ordering(1002) 00:18:00.370 fused_ordering(1003) 00:18:00.370 fused_ordering(1004) 00:18:00.370 fused_ordering(1005) 00:18:00.370 fused_ordering(1006) 00:18:00.370 fused_ordering(1007) 00:18:00.370 fused_ordering(1008) 00:18:00.370 fused_ordering(1009) 00:18:00.370 fused_ordering(1010) 00:18:00.370 fused_ordering(1011) 00:18:00.370 fused_ordering(1012) 00:18:00.370 fused_ordering(1013) 00:18:00.371 fused_ordering(1014) 00:18:00.371 fused_ordering(1015) 00:18:00.371 fused_ordering(1016) 00:18:00.371 fused_ordering(1017) 00:18:00.371 fused_ordering(1018) 00:18:00.371 fused_ordering(1019) 00:18:00.371 fused_ordering(1020) 00:18:00.371 fused_ordering(1021) 00:18:00.371 fused_ordering(1022) 00:18:00.371 fused_ordering(1023) 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:00.371 rmmod nvme_tcp 00:18:00.371 rmmod nvme_fabrics 00:18:00.371 rmmod nvme_keyring 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:00.371 16:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2190452 ']' 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2190452 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2190452 ']' 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2190452 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.371 16:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190452 00:18:00.371 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.371 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.371 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190452' 00:18:00.371 killing process with pid 2190452 00:18:00.371 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2190452 00:18:00.371 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2190452 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:00.631 16:44:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:02.547 00:18:02.547 real 0m10.572s 00:18:02.547 user 0m5.344s 00:18:02.547 sys 0m5.490s 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:02.547 ************************************ 00:18:02.547 END TEST nvmf_fused_ordering 00:18:02.547 
************************************ 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:02.547 ************************************ 00:18:02.547 START TEST nvmf_ns_masking 00:18:02.547 ************************************ 00:18:02.547 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:02.810 * Looking for test storage... 00:18:02.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.810 --rc genhtml_branch_coverage=1 00:18:02.810 --rc genhtml_function_coverage=1 00:18:02.810 --rc genhtml_legend=1 00:18:02.810 --rc geninfo_all_blocks=1 00:18:02.810 --rc geninfo_unexecuted_blocks=1 00:18:02.810 00:18:02.810 ' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.810 --rc genhtml_branch_coverage=1 00:18:02.810 --rc genhtml_function_coverage=1 00:18:02.810 --rc genhtml_legend=1 00:18:02.810 --rc geninfo_all_blocks=1 00:18:02.810 --rc geninfo_unexecuted_blocks=1 00:18:02.810 00:18:02.810 ' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.810 --rc genhtml_branch_coverage=1 00:18:02.810 --rc genhtml_function_coverage=1 00:18:02.810 --rc genhtml_legend=1 00:18:02.810 --rc geninfo_all_blocks=1 00:18:02.810 --rc geninfo_unexecuted_blocks=1 00:18:02.810 00:18:02.810 ' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.810 --rc genhtml_branch_coverage=1 00:18:02.810 --rc genhtml_function_coverage=1 00:18:02.810 --rc genhtml_legend=1 00:18:02.810 --rc geninfo_all_blocks=1 00:18:02.810 --rc geninfo_unexecuted_blocks=1 00:18:02.810 00:18:02.810 ' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.810 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=19c9f580-fe38-4aff-aee8-8110376fb535 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a779fadf-699d-46f9-820a-bb54029103a0 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b102f256-052b-401b-bcdb-0c8790f3cf94 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:02.811 16:44:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:08.089 16:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:08.089 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:08.089 16:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:08.089 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.089 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:08.090 Found net devices under 0000:31:00.0: cvl_0_0 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
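(For reference, the device-discovery xtrace around nvmf/common.sh@411 and @427 above reduces to the following sysfs lookup pattern — a minimal standalone bash sketch, assuming a Linux host; the PCI address 0000:31:00.0 is taken from the log entries above, any other PCI network function address works the same way:
    pci=0000:31:00.0
    # Interface names registered for a PCI network function live under its sysfs net/ directory
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Strip the sysfs path prefix, keeping only the interface names (e.g. cvl_0_0)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
This is the same two-step expansion the harness logs next as "Found net devices under 0000:31:00.0: cvl_0_0" and "Found net devices under 0000:31:00.1: cvl_0_1".)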
00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:08.090 Found net devices under 0000:31:00.1: cvl_0_1 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.090 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.350 16:44:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:08.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:18:08.350 00:18:08.350 --- 10.0.0.2 ping statistics --- 00:18:08.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.350 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:18:08.350 00:18:08.350 --- 10.0.0.1 ping statistics --- 00:18:08.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.350 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.350 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2195459 00:18:08.351 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2195459 00:18:08.351 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2195459 ']' 00:18:08.351 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.351 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.351 16:44:56 
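[editor's note] The net devices found above are then split into a point-to-point TCP test bed: cvl_0_0 moves into a fresh network namespace as the target port (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, and both directions are verified with one ping each. The equivalent manual setup, reconstructed from the trace (interface names and addresses exactly as logged):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target netns -> initiator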
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.351 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.351 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.351 16:44:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:08.351 [2024-12-06 16:44:56.976175] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:08.351 [2024-12-06 16:44:56.976244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.612 [2024-12-06 16:44:57.066647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.612 [2024-12-06 16:44:57.085493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.612 [2024-12-06 16:44:57.085527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.612 [2024-12-06 16:44:57.085535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.612 [2024-12-06 16:44:57.085542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.612 [2024-12-06 16:44:57.085548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
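[editor's note] nvmfappstart runs nvmf_tgt inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF) and waitforlisten blocks until the JSON-RPC socket answers, which is what produced the EAL and app notices above. A hedged sketch of that start-and-wait pattern; the real helper adds retries and timeouts, and the relative paths here stand in for the workspace paths in the trace:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Poll until the UNIX-domain RPC socket accepts a trivial call.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done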
00:18:08.612 [2024-12-06 16:44:57.086132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.612 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.612 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:08.612 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.612 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.612 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:08.612 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.612 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:08.873 [2024-12-06 16:44:57.328578] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.873 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:08.873 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:08.873 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:08.873 Malloc1 00:18:08.873 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:09.133 Malloc2 00:18:09.133 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:09.393 16:44:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:09.393 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.652 [2024-12-06 16:44:58.150143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.652 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:09.652 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b102f256-052b-401b-bcdb-0c8790f3cf94 -a 10.0.0.2 -s 4420 -i 4 00:18:09.911 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:09.911 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:09.912 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.912 16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:09.912 
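[editor's note] By the time the kernel initiator connects, the trace has provisioned the full target stack: a TCP transport, two 64 MiB / 512 B malloc bdevs, subsystem nqn.2016-06.io.spdk:cnode1, namespace 1 backed by Malloc1, and a listener on 10.0.0.2:4420. Condensed into the underlying calls, arguments copied from the entries above (rpc.py shortened from its workspace path):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I b102f256-052b-401b-bcdb-0c8790f3cf94 -a 10.0.0.2 -s 4420 -i 4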
16:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.848 [ 0]:0x1 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.848 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03e111115034bbfbe8e7192d83378e2 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03e111115034bbfbe8e7192d83378e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:12.108 [ 0]:0x1 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03e111115034bbfbe8e7192d83378e2 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03e111115034bbfbe8e7192d83378e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.108 16:45:00 
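[editor's note] The ns_is_visible checks above reduce to two nvme-cli queries: is the NSID present in the controller's namespace list, and does id-ns report a real NGUID for it. A masked namespace answers with an all-zero NGUID, which is exactly what the a03e... != 0000... comparison asserts. The same check with stock nvme-cli and jq, controller name as resolved above:

  nsid=0x1
  nvme list-ns /dev/nvme0 | grep "$nsid"                   # listed at all?
  nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
  [[ $nguid != 00000000000000000000000000000000 ]] && echo "nsid $nsid visible: $nguid"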
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:12.108 [ 1]:0x2 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c1baf35e598424dabd3bae21921d6e7 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c1baf35e598424dabd3bae21921d6e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:12.108 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.367 16:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.367 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:12.627 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:12.627 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b102f256-052b-401b-bcdb-0c8790f3cf94 -a 10.0.0.2 -s 4420 -i 4 00:18:12.885 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:12.885 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:12.885 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.885 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:12.885 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:12.885 16:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
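[editor's note] Masking proper starts in this stretch: namespace 1 is detached and re-added with --no-auto-visible, after which a fresh connect sees it only as an all-zero NGUID while namespace 2 stays visible. The two RPCs, copied from the trace:

  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible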
return 0 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.794 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:15.054 [ 0]:0x2 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=9c1baf35e598424dabd3bae21921d6e7 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c1baf35e598424dabd3bae21921d6e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:15.054 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.315 [ 0]:0x1 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03e111115034bbfbe8e7192d83378e2 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03e111115034bbfbe8e7192d83378e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:15.315 [ 1]:0x2 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c1baf35e598424dabd3bae21921d6e7 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c1baf35e598424dabd3bae21921d6e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.315 16:45:03 
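[editor's note] nvmf_ns_add_host and nvmf_ns_remove_host then toggle per-host visibility of the masked namespace at runtime, and the already-connected initiator observes the change without reconnecting: after add_host the nguid check reads a03e... again, after remove_host it drops back to all zeros. The toggle pair exercised above:

  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1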
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:15.315 16:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:15.575 [ 0]:0x2 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c1baf35e598424dabd3bae21921d6e7 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c1baf35e598424dabd3bae21921d6e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:15.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:15.575 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b102f256-052b-401b-bcdb-0c8790f3cf94 -a 10.0.0.2 -s 4420 -i 4 00:18:15.836 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:15.836 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:15.836 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.836 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:15.836 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:15.836 16:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:18.371 [ 0]:0x1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a03e111115034bbfbe8e7192d83378e2 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a03e111115034bbfbe8e7192d83378e2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:18.371 [ 1]:0x2 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c1baf35e598424dabd3bae21921d6e7 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c1baf35e598424dabd3bae21921d6e7 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:18.371 [ 0]:0x2 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c1baf35e598424dabd3bae21921d6e7 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c1baf35e598424dabd3bae21921d6e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.371 16:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:18.371 16:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:18.630 [2024-12-06 16:45:07.098627] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:18.630 request: 00:18:18.630 { 00:18:18.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.630 "nsid": 2, 00:18:18.630 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.630 "method": "nvmf_ns_remove_host", 00:18:18.630 "req_id": 1 00:18:18.630 } 00:18:18.630 Got JSON-RPC error response 00:18:18.630 response: 00:18:18.630 { 00:18:18.630 "code": -32602, 00:18:18.630 "message": "Invalid parameters" 00:18:18.630 } 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:18.630 16:45:07 
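[editor's note] The -32602 response above is the expected outcome of a negative test: namespace 2 was added without --no-auto-visible, and per-host visibility can only be edited on masked namespaces, so the nvmf_ns_remove_host call is rejected. The harness asserts this with its NOT wrapper; a plain-bash equivalent of that expect-failure pattern (illustrative, not the wrapper itself):

  # The test fails if the RPC unexpectedly succeeds.
  if rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
    echo "ERROR: editing visibility of an auto-visible namespace must fail" >&2
    exit 1
  fi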
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:18.630 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:18.631 [ 0]:0x2 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9c1baf35e598424dabd3bae21921d6e7 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9c1baf35e598424dabd3bae21921d6e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2198055 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
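[editor's note] For the second half of the test the initiator changes: a dedicated SPDK app (spdk_tgt -r /var/tmp/host.sock -m 2) plays the host, driven through the hostrpc helper. The entries that follow re-create both namespaces with explicit NGUIDs (the uuid2nguid helper strips the dashes from a UUID), allow one host NQN per namespace, then attach once per host and verify that each attach yields exactly the expected bdev name and UUID. Condensed from those entries, host-side calls marked by the host.sock socket:

  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
      -g 19C9F580FE384AFFAEE88110376FB535 -i
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # 19c9f580-...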
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2198055 /var/tmp/host.sock 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2198055 ']' 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:18.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.631 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.631 [2024-12-06 16:45:07.268502] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:18.631 [2024-12-06 16:45:07.268558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198055 ] 00:18:18.889 [2024-12-06 16:45:07.349657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.889 [2024-12-06 16:45:07.367914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.889 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.889 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:18.889 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.147 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:19.147 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 19c9f580-fe38-4aff-aee8-8110376fb535 00:18:19.147 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:19.405 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 19C9F580FE384AFFAEE88110376FB535 -i 00:18:19.406 16:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a779fadf-699d-46f9-820a-bb54029103a0 00:18:19.406 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:19.406 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A779FADF699D46F9820ABB54029103A0 -i 00:18:19.665 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:19.665 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:19.925 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:19.925 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:20.186 nvme0n1 00:18:20.186 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:20.186 16:45:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:20.446 nvme1n2 00:18:20.446 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:20.446 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:20.446 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:20.446 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:20.446 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 19c9f580-fe38-4aff-aee8-8110376fb535 == \1\9\c\9\f\5\8\0\-\f\e\3\8\-\4\a\f\f\-\a\e\e\8\-\8\1\1\0\3\7\6\f\b\5\3\5 ]] 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:20.706 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:20.966 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
a779fadf-699d-46f9-820a-bb54029103a0 == \a\7\7\9\f\a\d\f\-\6\9\9\d\-\4\6\f\9\-\8\2\0\a\-\b\b\5\4\0\2\9\1\0\3\a\0 ]] 00:18:20.967 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 19c9f580-fe38-4aff-aee8-8110376fb535 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 19C9F580FE384AFFAEE88110376FB535 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 19C9F580FE384AFFAEE88110376FB535 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:21.226 16:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 19C9F580FE384AFFAEE88110376FB535 00:18:21.484 [2024-12-06 16:45:10.014287] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:21.484 [2024-12-06 16:45:10.014316] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:21.484 [2024-12-06 16:45:10.014323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:21.484 request: 00:18:21.484 { 00:18:21.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:21.484 "namespace": { 00:18:21.484 "bdev_name": 
"invalid", 00:18:21.484 "nsid": 1, 00:18:21.484 "nguid": "19C9F580FE384AFFAEE88110376FB535", 00:18:21.484 "no_auto_visible": false, 00:18:21.484 "hide_metadata": false 00:18:21.484 }, 00:18:21.485 "method": "nvmf_subsystem_add_ns", 00:18:21.485 "req_id": 1 00:18:21.485 } 00:18:21.485 Got JSON-RPC error response 00:18:21.485 response: 00:18:21.485 { 00:18:21.485 "code": -32602, 00:18:21.485 "message": "Invalid parameters" 00:18:21.485 } 00:18:21.485 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:21.485 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.485 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.485 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.485 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 19c9f580-fe38-4aff-aee8-8110376fb535 00:18:21.485 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:21.485 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 19C9F580FE384AFFAEE88110376FB535 -i 00:18:21.744 16:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:23.648 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:23.648 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:23.648 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2198055 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2198055 ']' 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2198055 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2198055 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2198055' 00:18:23.906 killing process with pid 2198055 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2198055 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2198055 00:18:23.906 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:24.165 rmmod nvme_tcp 00:18:24.165 rmmod nvme_fabrics 00:18:24.165 rmmod nvme_keyring 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2195459 ']' 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2195459 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2195459 ']' 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2195459 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.165 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2195459 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2195459' 00:18:24.423 killing process with pid 2195459 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2195459 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2195459 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 
00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.423 16:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:26.960 00:18:26.960 real 0m23.807s 00:18:26.960 user 0m26.855s 00:18:26.960 sys 0m6.315s 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:26.960 ************************************ 00:18:26.960 END TEST nvmf_ns_masking 00:18:26.960 ************************************ 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.960 ************************************ 00:18:26.960 START TEST nvmf_nvme_cli 00:18:26.960 ************************************ 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:26.960 * Looking for test storage... 
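Note: the uuid2nguid helper traced in the ns_masking teardown above builds the 32-hex-digit NGUID that nvmf_subsystem_add_ns expects out of a canonical UUID. The trace only shows the dash-stripping step (tr -d -), so the upper-casing below is an assumption about the part of the helper that xtrace elided. A minimal stand-alone sketch:

    #!/usr/bin/env bash
    # Sketch: canonical UUID -> NGUID form passed to nvmf_subsystem_add_ns -g.
    # Only "tr -d -" appears in the trace; the upper-casing is assumed.
    uuid2nguid() {
        tr -d '-' <<< "$1" | tr '[:lower:]' '[:upper:]'
    }
    uuid2nguid 19c9f580-fe38-4aff-aee8-8110376fb535
    # prints: 19C9F580FE384AFFAEE88110376FB535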
00:18:26.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.960 --rc genhtml_branch_coverage=1 00:18:26.960 --rc genhtml_function_coverage=1 00:18:26.960 --rc genhtml_legend=1 00:18:26.960 --rc geninfo_all_blocks=1 00:18:26.960 --rc geninfo_unexecuted_blocks=1 00:18:26.960 00:18:26.960 ' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.960 --rc genhtml_branch_coverage=1 00:18:26.960 --rc genhtml_function_coverage=1 00:18:26.960 --rc genhtml_legend=1 00:18:26.960 --rc geninfo_all_blocks=1 00:18:26.960 --rc geninfo_unexecuted_blocks=1 00:18:26.960 00:18:26.960 ' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.960 --rc genhtml_branch_coverage=1 00:18:26.960 --rc genhtml_function_coverage=1 00:18:26.960 --rc genhtml_legend=1 00:18:26.960 --rc geninfo_all_blocks=1 00:18:26.960 --rc geninfo_unexecuted_blocks=1 00:18:26.960 00:18:26.960 ' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:26.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.960 --rc genhtml_branch_coverage=1 00:18:26.960 --rc genhtml_function_coverage=1 00:18:26.960 --rc genhtml_legend=1 00:18:26.960 --rc geninfo_all_blocks=1 00:18:26.960 --rc geninfo_unexecuted_blocks=1 00:18:26.960 00:18:26.960 ' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
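Note: the "lt 1.15 2" / cmp_versions trace above is scripts/common.sh checking whether the installed lcov predates 2.x before choosing coverage options: both version strings are split on '.' and '-' into arrays and compared field by field. A condensed sketch of that comparison, assuming purely numeric fields (the function name here is illustrative, not the script's own):

    #!/usr/bin/env bash
    # Sketch: field-by-field dotted-version comparison, as in the
    # cmp_versions trace above (split on '.' and '-', missing fields = 0).
    version_lt() {
        local -a v1 v2
        IFS='.-' read -ra v1 <<< "$1"
        IFS='.-' read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal, hence not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"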
00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.960 16:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:26.960 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.961 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.961 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.961 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:26.961 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:26.961 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:26.961 16:45:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:32.242 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:32.242 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.242 
16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.242 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:32.243 Found net devices under 0000:31:00.0: cvl_0_0 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:32.243 Found net devices under 0000:31:00.1: cvl_0_1 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:32.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:18:32.243 00:18:32.243 --- 10.0.0.2 ping statistics --- 00:18:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.243 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:18:32.243 00:18:32.243 --- 10.0.0.1 ping statistics --- 00:18:32.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.243 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2203949 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2203949 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2203949 ']' 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.243 16:45:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:32.243 [2024-12-06 16:45:20.643950] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
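Note: the nvmf_tcp_init sequence traced above turns the two ports of one physical NIC into a self-contained link: the target port cvl_0_0 is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule admits the NVMe/TCP listener port, and each direction is verified with a single ping (0.531 ms and 0.258 ms round trips above). Condensed, the same setup looks roughly like this (interface and namespace names taken from the log):

    #!/usr/bin/env bash
    set -e
    # Sketch: isolate the target port in a netns so NVMe/TCP traffic
    # really crosses the wire between the two ports of the NIC.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit the NVMe/TCP listener port; the comment tag lets teardown
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore) strip it
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator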
00:18:32.243 [2024-12-06 16:45:20.644010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.243 [2024-12-06 16:45:20.736091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:32.243 [2024-12-06 16:45:20.766360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.243 [2024-12-06 16:45:20.766410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.243 [2024-12-06 16:45:20.766419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.243 [2024-12-06 16:45:20.766426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.243 [2024-12-06 16:45:20.766432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.243 [2024-12-06 16:45:20.768350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.243 [2024-12-06 16:45:20.768521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.243 [2024-12-06 16:45:20.768688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.243 [2024-12-06 16:45:20.768689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.811 [2024-12-06 16:45:21.462488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.811 Malloc0 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
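Note: once the reactors are up, the rpc_cmd calls traced just above and continuing below provision the target over /var/tmp/spdk.sock: a TCP transport, two 64 MiB malloc bdevs, a subsystem with serial SPDKISFASTANDAWESOME, its two namespaces, and listeners for both the subsystem and discovery. Stripped of the xtrace noise, that sequence is roughly the following (all RPC names and arguments as they appear in the trace):

    #!/usr/bin/env bash
    set -e
    # Sketch: the provisioning RPCs from the surrounding trace, condensed.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512 B blocks
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420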
00:18:32.811 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 Malloc1 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 [2024-12-06 16:45:21.541418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:18:33.071 00:18:33.071 Discovery Log Number of Records 2, Generation counter 2 00:18:33.071 =====Discovery Log Entry 0====== 00:18:33.071 trtype: tcp 00:18:33.071 adrfam: ipv4 00:18:33.071 subtype: current discovery subsystem 00:18:33.071 treq: not required 00:18:33.071 portid: 0 00:18:33.071 trsvcid: 4420 00:18:33.071 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:18:33.071 traddr: 10.0.0.2 00:18:33.071 eflags: explicit discovery connections, duplicate discovery information 00:18:33.071 sectype: none 00:18:33.071 =====Discovery Log Entry 1====== 00:18:33.071 trtype: tcp 00:18:33.071 adrfam: ipv4 00:18:33.071 subtype: nvme subsystem 00:18:33.071 treq: not required 00:18:33.071 portid: 0 00:18:33.071 trsvcid: 4420 00:18:33.071 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:33.071 traddr: 10.0.0.2 00:18:33.071 eflags: none 00:18:33.071 sectype: none 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:33.071 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:33.072 16:45:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.598 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:34.598 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:34.598 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.598 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:34.598 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:34.598 16:45:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:36.537 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:36.537 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:36.537 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:36.537 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:36.797 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.797 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:36.797 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:36.797 16:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:36.798 /dev/nvme0n2 ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.798 16:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:36.798 rmmod nvme_tcp 00:18:36.798 rmmod nvme_fabrics 00:18:36.798 rmmod nvme_keyring 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2203949 ']' 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2203949 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2203949 ']' 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2203949 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2203949 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2203949' 00:18:36.798 killing process with pid 2203949 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2203949 00:18:36.798 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2203949 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.058 16:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.966 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:39.227 00:18:39.227 real 0m12.573s 00:18:39.227 user 0m20.706s 00:18:39.227 sys 0m4.677s 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:39.227 ************************************ 00:18:39.227 END TEST nvmf_nvme_cli 00:18:39.227 ************************************ 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.227 ************************************ 00:18:39.227 START TEST nvmf_vfio_user 00:18:39.227 ************************************ 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:18:39.227 * Looking for test storage... 00:18:39.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.227 --rc genhtml_branch_coverage=1 00:18:39.227 --rc genhtml_function_coverage=1 00:18:39.227 --rc genhtml_legend=1 00:18:39.227 --rc geninfo_all_blocks=1 00:18:39.227 --rc geninfo_unexecuted_blocks=1 00:18:39.227 00:18:39.227 ' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.227 --rc genhtml_branch_coverage=1 00:18:39.227 --rc genhtml_function_coverage=1 00:18:39.227 --rc genhtml_legend=1 00:18:39.227 --rc geninfo_all_blocks=1 00:18:39.227 --rc geninfo_unexecuted_blocks=1 00:18:39.227 00:18:39.227 ' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.227 --rc genhtml_branch_coverage=1 00:18:39.227 --rc genhtml_function_coverage=1 00:18:39.227 --rc genhtml_legend=1 00:18:39.227 --rc geninfo_all_blocks=1 00:18:39.227 --rc geninfo_unexecuted_blocks=1 00:18:39.227 00:18:39.227 ' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:39.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.227 --rc genhtml_branch_coverage=1 00:18:39.227 --rc genhtml_function_coverage=1 00:18:39.227 --rc genhtml_legend=1 00:18:39.227 --rc geninfo_all_blocks=1 00:18:39.227 --rc geninfo_unexecuted_blocks=1 00:18:39.227 00:18:39.227 ' 00:18:39.227 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:39.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
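The "[: : integer expression expected" message above is bash complaining that nvmf/common.sh line 33 ran '[' '' -eq 1 ']' with an empty value; the run simply falls through, since the failed numeric test only skips an optional branch. A minimal sketch of the usual guard, assuming a hypothetical flag name SOME_FLAG (the real variable's name is not visible in this log):

# default the flag to 0 so the numeric comparison never sees an empty string
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi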
00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2205709 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2205709' 00:18:39.228 Process pid: 2205709 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2205709 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2205709 ']' 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:39.228 16:45:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:39.228 [2024-12-06 16:45:27.903759] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:39.228 [2024-12-06 16:45:27.903829] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.489 [2024-12-06 16:45:27.972031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.489 [2024-12-06 16:45:27.988544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.489 [2024-12-06 16:45:27.988573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:39.489 [2024-12-06 16:45:27.988580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.489 [2024-12-06 16:45:27.988585] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.489 [2024-12-06 16:45:27.988590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.489 [2024-12-06 16:45:27.989832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.489 [2024-12-06 16:45:27.989958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.489 [2024-12-06 16:45:27.990104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.489 [2024-12-06 16:45:27.990149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.489 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.489 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:39.489 16:45:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:40.428 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:40.687 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:40.687 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:40.687 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:40.687 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:40.687 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:40.947 Malloc1 00:18:40.947 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:40.947 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:41.207 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:41.207 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:41.207 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:41.466 16:45:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:41.466 Malloc2 00:18:41.466 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
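Condensing the rpc.py calls above (the same subsystem calls repeat below for the second device), the per-device vfio-user setup is a fixed sequence: create the VFIOUSER transport once, then for each device create a malloc bdev, create a subsystem, attach the bdev as a namespace, and add a listener on the device's socket directory. A minimal sketch using only commands that appear in this log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER               # once per target
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done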
00:18:41.726 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:41.726 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:41.987 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:41.987 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:41.987 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:41.987 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:41.987 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:41.987 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:41.987 [2024-12-06 16:45:30.559675] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:18:41.987 [2024-12-06 16:45:30.559704] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2206395 ] 00:18:41.987 [2024-12-06 16:45:30.598267] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:41.987 [2024-12-06 16:45:30.608355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:41.987 [2024-12-06 16:45:30.608370] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdaf37f5000 00:18:41.987 [2024-12-06 16:45:30.609358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.610355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.611355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.612359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.613366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.614376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.615378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.616376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.987 [2024-12-06 16:45:30.617388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:41.987 [2024-12-06 16:45:30.617395] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdaf24fe000 00:18:41.987 [2024-12-06 16:45:30.618307] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:41.987 [2024-12-06 16:45:30.631380] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:41.987 [2024-12-06 16:45:30.631401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:41.987 [2024-12-06 16:45:30.636479] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:41.987 [2024-12-06 16:45:30.636512] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:41.987 [2024-12-06 16:45:30.636575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:41.987 [2024-12-06 16:45:30.636587] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:41.987 [2024-12-06 16:45:30.636591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:41.987 [2024-12-06 16:45:30.637482] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:41.987 [2024-12-06 16:45:30.637489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:41.987 [2024-12-06 16:45:30.637494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:41.987 [2024-12-06 16:45:30.638484] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:41.987 [2024-12-06 16:45:30.638490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:41.987 [2024-12-06 16:45:30.638496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:41.987 [2024-12-06 16:45:30.639489] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:41.987 [2024-12-06 16:45:30.639495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:41.987 [2024-12-06 16:45:30.640494] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:18:41.987 [2024-12-06 16:45:30.640500] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:41.987 [2024-12-06 16:45:30.640503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:41.987 [2024-12-06 16:45:30.640508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:41.987 [2024-12-06 16:45:30.640614] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:41.987 [2024-12-06 16:45:30.640617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:41.987 [2024-12-06 16:45:30.640621] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:41.987 [2024-12-06 16:45:30.641494] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:41.987 [2024-12-06 16:45:30.642508] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:41.987 [2024-12-06 16:45:30.643508] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:41.987 [2024-12-06 16:45:30.644511] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:41.987 [2024-12-06 16:45:30.644581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:41.987 [2024-12-06 16:45:30.645523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:41.987 [2024-12-06 16:45:30.645529] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:41.987 [2024-12-06 16:45:30.645532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:41.987 [2024-12-06 16:45:30.645547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:41.987 [2024-12-06 16:45:30.645552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:41.987 [2024-12-06 16:45:30.645568] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:41.987 [2024-12-06 16:45:30.645571] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.987 [2024-12-06 16:45:30.645574] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.987 [2024-12-06 16:45:30.645585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:18:41.987 [2024-12-06 16:45:30.645628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:41.987 [2024-12-06 16:45:30.645636] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:41.987 [2024-12-06 16:45:30.645642] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:41.987 [2024-12-06 16:45:30.645645] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:41.987 [2024-12-06 16:45:30.645648] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:41.987 [2024-12-06 16:45:30.645652] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:41.988 [2024-12-06 16:45:30.645655] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:41.988 [2024-12-06 16:45:30.645658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.645680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.645688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.988 [2024-12-06 16:45:30.645694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.988 [2024-12-06 16:45:30.645700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.988 [2024-12-06 16:45:30.645706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.988 [2024-12-06 16:45:30.645710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.645735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.645739] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:41.988 
[2024-12-06 16:45:30.645742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645748] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.645765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.645808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645819] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:41.988 [2024-12-06 16:45:30.645822] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:41.988 [2024-12-06 16:45:30.645825] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.988 [2024-12-06 16:45:30.645829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.645841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.645848] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:41.988 [2024-12-06 16:45:30.645858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645869] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:41.988 [2024-12-06 16:45:30.645872] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.988 [2024-12-06 16:45:30.645874] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.988 [2024-12-06 16:45:30.645878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.645895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.645905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645916] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:41.988 [2024-12-06 16:45:30.645919] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.988 [2024-12-06 16:45:30.645922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.988 [2024-12-06 16:45:30.645926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.645936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.645942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645969] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:41.988 [2024-12-06 16:45:30.645972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:41.988 [2024-12-06 16:45:30.645976] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:41.988 [2024-12-06 16:45:30.645989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.645996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.646005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.646013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.646021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.646033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.646041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.646049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.646059] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:41.988 [2024-12-06 16:45:30.646062] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:41.988 [2024-12-06 16:45:30.646064] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:41.988 [2024-12-06 16:45:30.646067] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:41.988 [2024-12-06 16:45:30.646069] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:41.988 [2024-12-06 16:45:30.646076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:41.988 [2024-12-06 16:45:30.646081] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:41.988 [2024-12-06 16:45:30.646084] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:41.988 [2024-12-06 16:45:30.646086] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.988 [2024-12-06 16:45:30.646091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.646095] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:41.988 [2024-12-06 16:45:30.646098] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.988 [2024-12-06 16:45:30.646105] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.988 [2024-12-06 16:45:30.646109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.646114] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:41.988 [2024-12-06 16:45:30.646117] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:41.988 [2024-12-06 16:45:30.646120] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.988 [2024-12-06 16:45:30.646124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:41.988 [2024-12-06 16:45:30.646129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.646138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.646146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:41.988 [2024-12-06 16:45:30.646151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:41.988 ===================================================== 00:18:41.989 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:41.989 ===================================================== 00:18:41.989 Controller Capabilities/Features 00:18:41.989 ================================ 00:18:41.989 Vendor ID: 4e58 00:18:41.989 Subsystem Vendor ID: 4e58 00:18:41.989 Serial Number: SPDK1 00:18:41.989 Model Number: SPDK bdev Controller 00:18:41.989 Firmware Version: 25.01 00:18:41.989 Recommended Arb Burst: 6 00:18:41.989 IEEE OUI Identifier: 8d 6b 50 00:18:41.989 Multi-path I/O 00:18:41.989 May have multiple subsystem ports: Yes 00:18:41.989 May have multiple controllers: Yes 00:18:41.989 Associated with SR-IOV VF: No 00:18:41.989 Max Data Transfer Size: 131072 00:18:41.989 Max Number of Namespaces: 32 00:18:41.989 Max Number of I/O Queues: 127 00:18:41.989 NVMe Specification Version (VS): 1.3 00:18:41.989 NVMe Specification Version (Identify): 1.3 00:18:41.989 Maximum Queue Entries: 256 00:18:41.989 Contiguous Queues Required: Yes 00:18:41.989 Arbitration Mechanisms Supported 00:18:41.989 Weighted Round Robin: Not Supported 00:18:41.989 Vendor Specific: Not Supported 00:18:41.989 Reset Timeout: 15000 ms 00:18:41.989 Doorbell Stride: 4 bytes 00:18:41.989 NVM Subsystem Reset: Not Supported 00:18:41.989 Command Sets Supported 00:18:41.989 NVM Command Set: Supported 00:18:41.989 Boot Partition: Not Supported 00:18:41.989 Memory Page Size Minimum: 4096 bytes 00:18:41.989 Memory Page Size Maximum: 4096 bytes 00:18:41.989 Persistent Memory Region: Not Supported 00:18:41.989 Optional Asynchronous Events Supported 00:18:41.989 Namespace Attribute Notices: Supported 00:18:41.989 Firmware Activation Notices: Not Supported 00:18:41.989 ANA Change Notices: Not Supported 00:18:41.989 PLE Aggregate Log Change Notices: Not Supported 00:18:41.989 LBA Status Info Alert Notices: Not Supported 00:18:41.989 EGE Aggregate Log Change Notices: Not Supported 00:18:41.989 Normal NVM Subsystem Shutdown event: Not Supported 00:18:41.989 Zone Descriptor Change Notices: Not Supported 00:18:41.989 Discovery Log Change Notices: Not Supported 00:18:41.989 Controller Attributes 00:18:41.989 128-bit Host Identifier: Supported 00:18:41.989 Non-Operational Permissive Mode: Not Supported 00:18:41.989 NVM Sets: Not Supported 00:18:41.989 Read Recovery Levels: Not Supported 00:18:41.989 Endurance Groups: Not Supported 00:18:41.989 Predictable Latency Mode: Not Supported 00:18:41.989 Traffic Based Keep ALive: Not Supported 00:18:41.989 Namespace Granularity: Not Supported 00:18:41.989 SQ Associations: Not Supported 00:18:41.989 UUID List: Not Supported 00:18:41.989 Multi-Domain Subsystem: Not Supported 00:18:41.989 Fixed Capacity Management: Not Supported 00:18:41.989 Variable Capacity Management: Not Supported 00:18:41.989 Delete Endurance Group: Not Supported 00:18:41.989 Delete NVM Set: Not Supported 00:18:41.989 Extended LBA Formats Supported: Not Supported 00:18:41.989 Flexible Data Placement Supported: Not Supported 00:18:41.989 00:18:41.989 Controller Memory Buffer Support 00:18:41.989 ================================ 00:18:41.989 
Supported: No 00:18:41.989 00:18:41.989 Persistent Memory Region Support 00:18:41.989 ================================ 00:18:41.989 Supported: No 00:18:41.989 00:18:41.989 Admin Command Set Attributes 00:18:41.989 ============================ 00:18:41.989 Security Send/Receive: Not Supported 00:18:41.989 Format NVM: Not Supported 00:18:41.989 Firmware Activate/Download: Not Supported 00:18:41.989 Namespace Management: Not Supported 00:18:41.989 Device Self-Test: Not Supported 00:18:41.989 Directives: Not Supported 00:18:41.989 NVMe-MI: Not Supported 00:18:41.989 Virtualization Management: Not Supported 00:18:41.989 Doorbell Buffer Config: Not Supported 00:18:41.989 Get LBA Status Capability: Not Supported 00:18:41.989 Command & Feature Lockdown Capability: Not Supported 00:18:41.989 Abort Command Limit: 4 00:18:41.989 Async Event Request Limit: 4 00:18:41.989 Number of Firmware Slots: N/A 00:18:41.989 Firmware Slot 1 Read-Only: N/A 00:18:41.989 Firmware Activation Without Reset: N/A 00:18:41.989 Multiple Update Detection Support: N/A 00:18:41.989 Firmware Update Granularity: No Information Provided 00:18:41.989 Per-Namespace SMART Log: No 00:18:41.989 Asymmetric Namespace Access Log Page: Not Supported 00:18:41.989 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:41.989 Command Effects Log Page: Supported 00:18:41.989 Get Log Page Extended Data: Supported 00:18:41.989 Telemetry Log Pages: Not Supported 00:18:41.989 Persistent Event Log Pages: Not Supported 00:18:41.989 Supported Log Pages Log Page: May Support 00:18:41.989 Commands Supported & Effects Log Page: Not Supported 00:18:41.989 Feature Identifiers & Effects Log Page:May Support 00:18:41.989 NVMe-MI Commands & Effects Log Page: May Support 00:18:41.989 Data Area 4 for Telemetry Log: Not Supported 00:18:41.989 Error Log Page Entries Supported: 128 00:18:41.989 Keep Alive: Supported 00:18:41.989 Keep Alive Granularity: 10000 ms 00:18:41.989 00:18:41.989 NVM Command Set Attributes 00:18:41.989 ========================== 00:18:41.989 Submission Queue Entry Size 00:18:41.989 Max: 64 00:18:41.989 Min: 64 00:18:41.989 Completion Queue Entry Size 00:18:41.989 Max: 16 00:18:41.989 Min: 16 00:18:41.989 Number of Namespaces: 32 00:18:41.989 Compare Command: Supported 00:18:41.989 Write Uncorrectable Command: Not Supported 00:18:41.989 Dataset Management Command: Supported 00:18:41.989 Write Zeroes Command: Supported 00:18:41.989 Set Features Save Field: Not Supported 00:18:41.989 Reservations: Not Supported 00:18:41.989 Timestamp: Not Supported 00:18:41.989 Copy: Supported 00:18:41.989 Volatile Write Cache: Present 00:18:41.989 Atomic Write Unit (Normal): 1 00:18:41.989 Atomic Write Unit (PFail): 1 00:18:41.989 Atomic Compare & Write Unit: 1 00:18:41.989 Fused Compare & Write: Supported 00:18:41.989 Scatter-Gather List 00:18:41.989 SGL Command Set: Supported (Dword aligned) 00:18:41.989 SGL Keyed: Not Supported 00:18:41.989 SGL Bit Bucket Descriptor: Not Supported 00:18:41.989 SGL Metadata Pointer: Not Supported 00:18:41.989 Oversized SGL: Not Supported 00:18:41.989 SGL Metadata Address: Not Supported 00:18:41.989 SGL Offset: Not Supported 00:18:41.989 Transport SGL Data Block: Not Supported 00:18:41.989 Replay Protected Memory Block: Not Supported 00:18:41.989 00:18:41.989 Firmware Slot Information 00:18:41.989 ========================= 00:18:41.989 Active slot: 1 00:18:41.989 Slot 1 Firmware Revision: 25.01 00:18:41.989 00:18:41.989 00:18:41.989 Commands Supported and Effects 00:18:41.989 ============================== 00:18:41.989 Admin 
Commands 00:18:41.989 -------------- 00:18:41.989 Get Log Page (02h): Supported 00:18:41.989 Identify (06h): Supported 00:18:41.989 Abort (08h): Supported 00:18:41.989 Set Features (09h): Supported 00:18:41.989 Get Features (0Ah): Supported 00:18:41.989 Asynchronous Event Request (0Ch): Supported 00:18:41.989 Keep Alive (18h): Supported 00:18:41.989 I/O Commands 00:18:41.989 ------------ 00:18:41.989 Flush (00h): Supported LBA-Change 00:18:41.989 Write (01h): Supported LBA-Change 00:18:41.989 Read (02h): Supported 00:18:41.989 Compare (05h): Supported 00:18:41.989 Write Zeroes (08h): Supported LBA-Change 00:18:41.989 Dataset Management (09h): Supported LBA-Change 00:18:41.989 Copy (19h): Supported LBA-Change 00:18:41.989 00:18:41.989 Error Log 00:18:41.989 ========= 00:18:41.989 00:18:41.989 Arbitration 00:18:41.989 =========== 00:18:41.989 Arbitration Burst: 1 00:18:41.989 00:18:41.989 Power Management 00:18:41.989 ================ 00:18:41.989 Number of Power States: 1 00:18:41.989 Current Power State: Power State #0 00:18:41.989 Power State #0: 00:18:41.989 Max Power: 0.00 W 00:18:41.989 Non-Operational State: Operational 00:18:41.989 Entry Latency: Not Reported 00:18:41.989 Exit Latency: Not Reported 00:18:41.989 Relative Read Throughput: 0 00:18:41.989 Relative Read Latency: 0 00:18:41.989 Relative Write Throughput: 0 00:18:41.989 Relative Write Latency: 0 00:18:41.989 Idle Power: Not Reported 00:18:41.989 Active Power: Not Reported 00:18:41.989 Non-Operational Permissive Mode: Not Supported 00:18:41.989 00:18:41.989 Health Information 00:18:41.989 ================== 00:18:41.989 Critical Warnings: 00:18:41.989 Available Spare Space: OK 00:18:41.989 Temperature: OK 00:18:41.989 Device Reliability: OK 00:18:41.989 Read Only: No 00:18:41.989 Volatile Memory Backup: OK 00:18:41.989 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:41.989 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:41.989 Available Spare: 0% 00:18:41.990 Available Sp[2024-12-06 16:45:30.646223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:41.990 [2024-12-06 16:45:30.646228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:41.990 [2024-12-06 16:45:30.646249] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:41.990 [2024-12-06 16:45:30.646256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.990 [2024-12-06 16:45:30.646260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.990 [2024-12-06 16:45:30.646265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.990 [2024-12-06 16:45:30.646269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.990 [2024-12-06 16:45:30.646528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:41.990 [2024-12-06 16:45:30.646535] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:41.990 [2024-12-06 16:45:30.647530] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:41.990 [2024-12-06 16:45:30.647574] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:41.990 [2024-12-06 16:45:30.647580] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:41.990 [2024-12-06 16:45:30.648542] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:41.990 [2024-12-06 16:45:30.648550] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:41.990 [2024-12-06 16:45:30.648605] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:41.990 [2024-12-06 16:45:30.649559] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:42.250 are Threshold: 0% 00:18:42.250 Life Percentage Used: 0% 00:18:42.250 Data Units Read: 0 00:18:42.250 Data Units Written: 0 00:18:42.250 Host Read Commands: 0 00:18:42.250 Host Write Commands: 0 00:18:42.250 Controller Busy Time: 0 minutes 00:18:42.250 Power Cycles: 0 00:18:42.250 Power On Hours: 0 hours 00:18:42.250 Unsafe Shutdowns: 0 00:18:42.250 Unrecoverable Media Errors: 0 00:18:42.250 Lifetime Error Log Entries: 0 00:18:42.250 Warning Temperature Time: 0 minutes 00:18:42.250 Critical Temperature Time: 0 minutes 00:18:42.250 00:18:42.250 Number of Queues 00:18:42.250 ================ 00:18:42.250 Number of I/O Submission Queues: 127 00:18:42.250 Number of I/O Completion Queues: 127 00:18:42.250 00:18:42.250 Active Namespaces 00:18:42.250 ================= 00:18:42.250 Namespace ID:1 00:18:42.250 Error Recovery Timeout: Unlimited 00:18:42.250 Command Set Identifier: NVM (00h) 00:18:42.250 Deallocate: Supported 00:18:42.250 Deallocated/Unwritten Error: Not Supported 00:18:42.250 Deallocated Read Value: Unknown 00:18:42.250 Deallocate in Write Zeroes: Not Supported 00:18:42.250 Deallocated Guard Field: 0xFFFF 00:18:42.250 Flush: Supported 00:18:42.250 Reservation: Supported 00:18:42.250 Namespace Sharing Capabilities: Multiple Controllers 00:18:42.250 Size (in LBAs): 131072 (0GiB) 00:18:42.250 Capacity (in LBAs): 131072 (0GiB) 00:18:42.250 Utilization (in LBAs): 131072 (0GiB) 00:18:42.250 NGUID: 1CBEFDCC29FC414F93DB1F3D290F3AB8 00:18:42.250 UUID: 1cbefdcc-29fc-414f-93db-1f3d290f3ab8 00:18:42.250 Thin Provisioning: Not Supported 00:18:42.250 Per-NS Atomic Units: Yes 00:18:42.250 Atomic Boundary Size (Normal): 0 00:18:42.250 Atomic Boundary Size (PFail): 0 00:18:42.250 Atomic Boundary Offset: 0 00:18:42.250 Maximum Single Source Range Length: 65535 00:18:42.250 Maximum Copy Length: 65535 00:18:42.250 Maximum Source Range Count: 1 00:18:42.250 NGUID/EUI64 Never Reused: No 00:18:42.250 Namespace Write Protected: No 00:18:42.250 Number of LBA Formats: 1 00:18:42.250 Current LBA Format: LBA Format #00 00:18:42.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:42.250 00:18:42.250 16:45:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
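The spdk_nvme_perf invocation above exercises the first vfio-user controller with queue depth 128 (-q 128), 4 KiB I/Os (-o 4096), a pure-read workload (-w read) for 5 seconds (-t 5), pinned to core 1 (-c 0x2). In the results table that follows, MiB/s is just IOPS scaled by the I/O size: MiB/s = IOPS * 4096 / 2^20, e.g. 40066.41 * 4096 / 1048576 ≈ 156.51.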
00:18:42.250 [2024-12-06 16:45:30.819724] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:47.524 Initializing NVMe Controllers 00:18:47.524 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:47.524 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:47.524 Initialization complete. Launching workers. 00:18:47.524 ======================================================== 00:18:47.524 Latency(us) 00:18:47.524 Device Information : IOPS MiB/s Average min max 00:18:47.524 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40066.41 156.51 3194.58 865.64 6883.89 00:18:47.524 ======================================================== 00:18:47.524 Total : 40066.41 156.51 3194.58 865.64 6883.89 00:18:47.524 00:18:47.524 [2024-12-06 16:45:35.841691] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:47.524 16:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:47.524 [2024-12-06 16:45:36.021533] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:52.802 Initializing NVMe Controllers 00:18:52.802 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:52.802 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:52.802 Initialization complete. Launching workers. 
00:18:52.802 ======================================================== 00:18:52.802 Latency(us) 00:18:52.802 Device Information : IOPS MiB/s Average min max 00:18:52.802 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16071.00 62.78 7975.66 5776.73 14550.90 00:18:52.802 ======================================================== 00:18:52.802 Total : 16071.00 62.78 7975.66 5776.73 14550.90 00:18:52.802 00:18:52.802 [2024-12-06 16:45:41.059014] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:52.802 16:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:52.802 [2024-12-06 16:45:41.257858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:58.084 [2024-12-06 16:45:46.332338] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:58.084 Initializing NVMe Controllers 00:18:58.084 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:58.084 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:58.084 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:58.084 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:58.084 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:58.084 Initialization complete. Launching workers. 00:18:58.084 Starting thread on core 2 00:18:58.084 Starting thread on core 3 00:18:58.084 Starting thread on core 1 00:18:58.084 16:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:58.084 [2024-12-06 16:45:46.574022] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:01.378 [2024-12-06 16:45:49.633363] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:01.378 Initializing NVMe Controllers 00:19:01.378 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:01.378 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:01.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:01.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:01.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:01.378 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:01.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:01.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:01.378 Initialization complete. Launching workers. 
00:19:01.378 Starting thread on core 1 with urgent priority queue 00:19:01.378 Starting thread on core 2 with urgent priority queue 00:19:01.378 Starting thread on core 3 with urgent priority queue 00:19:01.378 Starting thread on core 0 with urgent priority queue 00:19:01.378 SPDK bdev Controller (SPDK1 ) core 0: 11592.00 IO/s 8.63 secs/100000 ios 00:19:01.378 SPDK bdev Controller (SPDK1 ) core 1: 10111.33 IO/s 9.89 secs/100000 ios 00:19:01.378 SPDK bdev Controller (SPDK1 ) core 2: 10716.00 IO/s 9.33 secs/100000 ios 00:19:01.378 SPDK bdev Controller (SPDK1 ) core 3: 13011.00 IO/s 7.69 secs/100000 ios 00:19:01.378 ======================================================== 00:19:01.378 00:19:01.378 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:01.378 [2024-12-06 16:45:49.870540] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:01.378 Initializing NVMe Controllers 00:19:01.378 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:01.378 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:01.378 Namespace ID: 1 size: 0GB 00:19:01.378 Initialization complete. 00:19:01.378 INFO: using host memory buffer for IO 00:19:01.378 Hello world! 00:19:01.378 [2024-12-06 16:45:49.904765] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:01.378 16:45:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:01.637 [2024-12-06 16:45:50.142539] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:02.576 Initializing NVMe Controllers 00:19:02.576 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:02.576 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:02.576 Initialization complete. Launching workers. 
00:19:02.576 submit (in ns) avg, min, max = 6374.5, 2819.2, 6990778.3 00:19:02.576 complete (in ns) avg, min, max = 17573.3, 1640.0, 3999069.2 00:19:02.576 00:19:02.576 Submit histogram 00:19:02.576 ================ 00:19:02.576 Range in us Cumulative Count 00:19:02.576 2.813 - 2.827: 0.0254% ( 5) 00:19:02.576 2.827 - 2.840: 0.3508% ( 64) 00:19:02.576 2.840 - 2.853: 2.0033% ( 325) 00:19:02.576 2.853 - 2.867: 4.6014% ( 511) 00:19:02.576 2.867 - 2.880: 8.9587% ( 857) 00:19:02.576 2.880 - 2.893: 14.0940% ( 1010) 00:19:02.576 2.893 - 2.907: 19.3207% ( 1028) 00:19:02.576 2.907 - 2.920: 25.3813% ( 1192) 00:19:02.576 2.920 - 2.933: 32.1334% ( 1328) 00:19:02.576 2.933 - 2.947: 38.7838% ( 1308) 00:19:02.576 2.947 - 2.960: 45.8918% ( 1398) 00:19:02.576 2.960 - 2.973: 53.0710% ( 1412) 00:19:02.576 2.973 - 2.987: 62.0399% ( 1764) 00:19:02.576 2.987 - 3.000: 70.4495% ( 1654) 00:19:02.576 3.000 - 3.013: 79.0421% ( 1690) 00:19:02.576 3.013 - 3.027: 85.9315% ( 1355) 00:19:02.576 3.027 - 3.040: 91.2497% ( 1046) 00:19:02.576 3.040 - 3.053: 94.9766% ( 733) 00:19:02.576 3.053 - 3.067: 97.3256% ( 462) 00:19:02.576 3.067 - 3.080: 98.5306% ( 237) 00:19:02.576 3.080 - 3.093: 99.0950% ( 111) 00:19:02.576 3.093 - 3.107: 99.3543% ( 51) 00:19:02.576 3.107 - 3.120: 99.4814% ( 25) 00:19:02.576 3.120 - 3.133: 99.5119% ( 6) 00:19:02.576 3.133 - 3.147: 99.5322% ( 4) 00:19:02.576 3.173 - 3.187: 99.5424% ( 2) 00:19:02.576 3.493 - 3.520: 99.5526% ( 2) 00:19:02.576 3.520 - 3.547: 99.5577% ( 1) 00:19:02.576 3.573 - 3.600: 99.5627% ( 1) 00:19:02.576 3.760 - 3.787: 99.5678% ( 1) 00:19:02.576 3.973 - 4.000: 99.5729% ( 1) 00:19:02.576 4.053 - 4.080: 99.5780% ( 1) 00:19:02.576 4.400 - 4.427: 99.5831% ( 1) 00:19:02.576 4.427 - 4.453: 99.5882% ( 1) 00:19:02.576 4.480 - 4.507: 99.5932% ( 1) 00:19:02.576 4.507 - 4.533: 99.5983% ( 1) 00:19:02.576 4.667 - 4.693: 99.6034% ( 1) 00:19:02.576 4.693 - 4.720: 99.6085% ( 1) 00:19:02.576 4.800 - 4.827: 99.6136% ( 1) 00:19:02.576 4.853 - 4.880: 99.6187% ( 1) 00:19:02.576 4.880 - 4.907: 99.6288% ( 2) 00:19:02.576 4.907 - 4.933: 99.6339% ( 1) 00:19:02.576 4.933 - 4.960: 99.6390% ( 1) 00:19:02.576 4.987 - 5.013: 99.6492% ( 2) 00:19:02.576 5.013 - 5.040: 99.6543% ( 1) 00:19:02.576 5.040 - 5.067: 99.6593% ( 1) 00:19:02.576 5.120 - 5.147: 99.6644% ( 1) 00:19:02.576 5.413 - 5.440: 99.6695% ( 1) 00:19:02.576 5.520 - 5.547: 99.6746% ( 1) 00:19:02.576 5.547 - 5.573: 99.6797% ( 1) 00:19:02.576 5.573 - 5.600: 99.6899% ( 2) 00:19:02.576 5.627 - 5.653: 99.6949% ( 1) 00:19:02.576 5.653 - 5.680: 99.7000% ( 1) 00:19:02.576 5.787 - 5.813: 99.7102% ( 2) 00:19:02.576 5.813 - 5.840: 99.7153% ( 1) 00:19:02.576 5.840 - 5.867: 99.7204% ( 1) 00:19:02.576 5.867 - 5.893: 99.7254% ( 1) 00:19:02.576 5.893 - 5.920: 99.7356% ( 2) 00:19:02.576 5.920 - 5.947: 99.7407% ( 1) 00:19:02.576 5.947 - 5.973: 99.7559% ( 3) 00:19:02.576 5.973 - 6.000: 99.7610% ( 1) 00:19:02.576 6.027 - 6.053: 99.7712% ( 2) 00:19:02.576 6.080 - 6.107: 99.7763% ( 1) 00:19:02.576 6.107 - 6.133: 99.7915% ( 3) 00:19:02.576 6.160 - 6.187: 99.7966% ( 1) 00:19:02.576 6.187 - 6.213: 99.8017% ( 1) 00:19:02.576 6.267 - 6.293: 99.8119% ( 2) 00:19:02.576 6.320 - 6.347: 99.8271% ( 3) 00:19:02.576 6.347 - 6.373: 99.8322% ( 1) 00:19:02.576 6.427 - 6.453: 99.8373% ( 1) 00:19:02.576 6.507 - 6.533: 99.8424% ( 1) 00:19:02.576 6.613 - 6.640: 99.8475% ( 1) 00:19:02.576 6.667 - 6.693: 99.8526% ( 1) 00:19:02.576 6.693 - 6.720: 99.8627% ( 2) 00:19:02.576 6.720 - 6.747: 99.8678% ( 1) 00:19:02.576 6.773 - 6.800: 99.8729% ( 1) 00:19:02.576 6.933 - 6.987: 99.8780% ( 1) 00:19:02.577 
6.987 - 7.040: 99.8881% ( 2) 00:19:02.577 7.147 - 7.200: 99.8932% ( 1) 00:19:02.577 [2024-12-06 16:45:51.158282] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:02.577 7.253 - 7.307: 99.8983% ( 1) 00:19:02.577 7.307 - 7.360: 99.9034% ( 1) 00:19:02.577 7.573 - 7.627: 99.9085% ( 1) 00:19:02.577 8.427 - 8.480: 99.9136% ( 1) 00:19:02.577 8.480 - 8.533: 99.9186% ( 1) 00:19:02.577 3986.773 - 4014.080: 99.9949% ( 15) 00:19:02.577 6990.507 - 7045.120: 100.0000% ( 1) 00:19:02.577 00:19:02.577 Complete histogram 00:19:02.577 ================== 00:19:02.577 Range in us Cumulative Count 00:19:02.577 1.640 - 1.647: 0.0102% ( 2) 00:19:02.577 1.647 - 1.653: 0.0203% ( 2) 00:19:02.577 1.653 - 1.660: 0.7423% ( 142) 00:19:02.577 1.660 - 1.667: 0.9050% ( 32) 00:19:02.577 1.667 - 1.673: 1.0067% ( 20) 00:19:02.577 1.673 - 1.680: 1.1897% ( 36) 00:19:02.577 1.680 - 1.687: 1.2203% ( 6) 00:19:02.577 1.687 - 1.693: 1.2558% ( 7) 00:19:02.577 1.693 - 1.700: 25.1830% ( 4706) 00:19:02.577 1.700 - 1.707: 38.6618% ( 2651) 00:19:02.577 1.707 - 1.720: 64.9278% ( 5166) 00:19:02.577 1.720 - 1.733: 78.6913% ( 2707) 00:19:02.577 1.733 - 1.747: 83.4096% ( 928) 00:19:02.577 1.747 - 1.760: 85.3773% ( 387) 00:19:02.577 1.760 - 1.773: 89.3278% ( 777) 00:19:02.577 1.773 - 1.787: 94.2953% ( 977) 00:19:02.577 1.787 - 1.800: 97.7781% ( 685) 00:19:02.577 1.800 - 1.813: 99.1255% ( 265) 00:19:02.577 1.813 - 1.827: 99.4458% ( 63) 00:19:02.577 1.827 - 1.840: 99.4611% ( 3) 00:19:02.577 1.840 - 1.853: 99.4661% ( 1) 00:19:02.577 1.853 - 1.867: 99.4763% ( 2) 00:19:02.577 1.867 - 1.880: 99.4814% ( 1) 00:19:02.577 1.987 - 2.000: 99.4865% ( 1) 00:19:02.577 3.360 - 3.373: 99.4916% ( 1) 00:19:02.577 4.160 - 4.187: 99.5017% ( 2) 00:19:02.577 4.400 - 4.427: 99.5119% ( 2) 00:19:02.577 4.560 - 4.587: 99.5170% ( 1) 00:19:02.577 4.640 - 4.667: 99.5272% ( 2) 00:19:02.577 4.693 - 4.720: 99.5322% ( 1) 00:19:02.577 4.747 - 4.773: 99.5373% ( 1) 00:19:02.577 4.880 - 4.907: 99.5475% ( 2) 00:19:02.577 4.907 - 4.933: 99.5526% ( 1) 00:19:02.577 5.333 - 5.360: 99.5577% ( 1) 00:19:02.577 5.413 - 5.440: 99.5627% ( 1) 00:19:02.577 5.440 - 5.467: 99.5678% ( 1) 00:19:02.577 5.520 - 5.547: 99.5729% ( 1) 00:19:02.577 5.653 - 5.680: 99.5780% ( 1) 00:19:02.577 5.787 - 5.813: 99.5831% ( 1) 00:19:02.577 6.133 - 6.160: 99.5882% ( 1) 00:19:02.577 12.160 - 12.213: 99.5932% ( 1) 00:19:02.577 12.320 - 12.373: 99.5983% ( 1) 00:19:02.577 127.147 - 128.000: 99.6034% ( 1) 00:19:02.577 3986.773 - 4014.080: 100.0000% ( 78) 00:19:02.577 00:19:02.577 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:02.577 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:02.577 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:02.577 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:02.577 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:02.838 [ 00:19:02.838 { 00:19:02.838 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:02.838 "subtype": "Discovery", 00:19:02.838 "listen_addresses": [], 00:19:02.838 "allow_any_host": true, 00:19:02.838 "hosts": [] 
00:19:02.838 }, 00:19:02.838 { 00:19:02.838 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:02.838 "subtype": "NVMe", 00:19:02.838 "listen_addresses": [ 00:19:02.838 { 00:19:02.838 "trtype": "VFIOUSER", 00:19:02.838 "adrfam": "IPv4", 00:19:02.838 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:02.838 "trsvcid": "0" 00:19:02.838 } 00:19:02.838 ], 00:19:02.838 "allow_any_host": true, 00:19:02.838 "hosts": [], 00:19:02.838 "serial_number": "SPDK1", 00:19:02.838 "model_number": "SPDK bdev Controller", 00:19:02.838 "max_namespaces": 32, 00:19:02.838 "min_cntlid": 1, 00:19:02.838 "max_cntlid": 65519, 00:19:02.838 "namespaces": [ 00:19:02.838 { 00:19:02.838 "nsid": 1, 00:19:02.838 "bdev_name": "Malloc1", 00:19:02.838 "name": "Malloc1", 00:19:02.838 "nguid": "1CBEFDCC29FC414F93DB1F3D290F3AB8", 00:19:02.838 "uuid": "1cbefdcc-29fc-414f-93db-1f3d290f3ab8" 00:19:02.838 } 00:19:02.838 ] 00:19:02.838 }, 00:19:02.838 { 00:19:02.838 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:02.838 "subtype": "NVMe", 00:19:02.838 "listen_addresses": [ 00:19:02.838 { 00:19:02.838 "trtype": "VFIOUSER", 00:19:02.838 "adrfam": "IPv4", 00:19:02.838 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:02.838 "trsvcid": "0" 00:19:02.838 } 00:19:02.838 ], 00:19:02.838 "allow_any_host": true, 00:19:02.838 "hosts": [], 00:19:02.838 "serial_number": "SPDK2", 00:19:02.838 "model_number": "SPDK bdev Controller", 00:19:02.838 "max_namespaces": 32, 00:19:02.838 "min_cntlid": 1, 00:19:02.838 "max_cntlid": 65519, 00:19:02.838 "namespaces": [ 00:19:02.838 { 00:19:02.838 "nsid": 1, 00:19:02.838 "bdev_name": "Malloc2", 00:19:02.838 "name": "Malloc2", 00:19:02.838 "nguid": "4B6DA3680FB646A8964D0A752CA3B1CF", 00:19:02.838 "uuid": "4b6da368-0fb6-46a8-964d-0a752ca3b1cf" 00:19:02.839 } 00:19:02.839 ] 00:19:02.839 } 00:19:02.839 ] 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2210905 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:02.839 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:02.839 [2024-12-06 16:45:51.512502] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:02.839 Malloc3 00:19:03.099 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:03.099 [2024-12-06 16:45:51.674557] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:03.099 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:03.099 Asynchronous Event Request test 00:19:03.099 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:03.099 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:03.099 Registering asynchronous event callbacks... 00:19:03.099 Starting namespace attribute notice tests for all controllers... 00:19:03.099 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:03.099 aer_cb - Changed Namespace 00:19:03.099 Cleaning up... 00:19:03.361 [ 00:19:03.361 { 00:19:03.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:03.361 "subtype": "Discovery", 00:19:03.361 "listen_addresses": [], 00:19:03.361 "allow_any_host": true, 00:19:03.361 "hosts": [] 00:19:03.361 }, 00:19:03.361 { 00:19:03.361 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:03.361 "subtype": "NVMe", 00:19:03.361 "listen_addresses": [ 00:19:03.361 { 00:19:03.361 "trtype": "VFIOUSER", 00:19:03.361 "adrfam": "IPv4", 00:19:03.361 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:03.361 "trsvcid": "0" 00:19:03.361 } 00:19:03.361 ], 00:19:03.361 "allow_any_host": true, 00:19:03.361 "hosts": [], 00:19:03.361 "serial_number": "SPDK1", 00:19:03.361 "model_number": "SPDK bdev Controller", 00:19:03.361 "max_namespaces": 32, 00:19:03.361 "min_cntlid": 1, 00:19:03.361 "max_cntlid": 65519, 00:19:03.361 "namespaces": [ 00:19:03.361 { 00:19:03.361 "nsid": 1, 00:19:03.361 "bdev_name": "Malloc1", 00:19:03.361 "name": "Malloc1", 00:19:03.361 "nguid": "1CBEFDCC29FC414F93DB1F3D290F3AB8", 00:19:03.361 "uuid": "1cbefdcc-29fc-414f-93db-1f3d290f3ab8" 00:19:03.361 }, 00:19:03.361 { 00:19:03.361 "nsid": 2, 00:19:03.361 "bdev_name": "Malloc3", 00:19:03.361 "name": "Malloc3", 00:19:03.361 "nguid": "15659F2DBDA94249B6DCBEF097C926E7", 00:19:03.361 "uuid": "15659f2d-bda9-4249-b6dc-bef097c926e7" 00:19:03.361 } 00:19:03.361 ] 00:19:03.361 }, 00:19:03.361 { 00:19:03.361 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:03.361 "subtype": "NVMe", 00:19:03.361 "listen_addresses": [ 00:19:03.361 { 00:19:03.361 "trtype": "VFIOUSER", 00:19:03.361 "adrfam": "IPv4", 00:19:03.361 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:03.361 "trsvcid": "0" 00:19:03.361 } 00:19:03.361 ], 00:19:03.361 "allow_any_host": true, 00:19:03.361 "hosts": [], 00:19:03.361 "serial_number": "SPDK2", 00:19:03.361 "model_number": "SPDK bdev 
Controller", 00:19:03.361 "max_namespaces": 32, 00:19:03.361 "min_cntlid": 1, 00:19:03.361 "max_cntlid": 65519, 00:19:03.361 "namespaces": [ 00:19:03.361 { 00:19:03.361 "nsid": 1, 00:19:03.361 "bdev_name": "Malloc2", 00:19:03.361 "name": "Malloc2", 00:19:03.361 "nguid": "4B6DA3680FB646A8964D0A752CA3B1CF", 00:19:03.361 "uuid": "4b6da368-0fb6-46a8-964d-0a752ca3b1cf" 00:19:03.361 } 00:19:03.361 ] 00:19:03.361 } 00:19:03.361 ] 00:19:03.361 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2210905 00:19:03.362 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:03.362 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:03.362 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:03.362 16:45:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:03.362 [2024-12-06 16:45:51.861360] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:19:03.362 [2024-12-06 16:45:51.861389] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2211067 ] 00:19:03.362 [2024-12-06 16:45:51.900761] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:03.362 [2024-12-06 16:45:51.909276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:03.362 [2024-12-06 16:45:51.909293] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1004d1d000 00:19:03.362 [2024-12-06 16:45:51.910268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.362 [2024-12-06 16:45:51.911277] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.362 [2024-12-06 16:45:51.912281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.362 [2024-12-06 16:45:51.913293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:03.362 [2024-12-06 16:45:51.914301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:03.362 [2024-12-06 16:45:51.915305] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:03.362 [2024-12-06 16:45:51.916313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:03.362 [2024-12-06 16:45:51.917315] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:19:03.362 [2024-12-06 16:45:51.918327] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:03.362 [2024-12-06 16:45:51.918334] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1003a26000 00:19:03.362 [2024-12-06 16:45:51.919244] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:03.362 [2024-12-06 16:45:51.932621] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:03.362 [2024-12-06 16:45:51.932642] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:03.362 [2024-12-06 16:45:51.934694] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:03.362 [2024-12-06 16:45:51.934724] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:03.362 [2024-12-06 16:45:51.934780] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:03.362 [2024-12-06 16:45:51.934788] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:03.362 [2024-12-06 16:45:51.934792] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:03.362 [2024-12-06 16:45:51.935704] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:03.362 [2024-12-06 16:45:51.935711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:03.362 [2024-12-06 16:45:51.935716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:03.362 [2024-12-06 16:45:51.936708] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:03.362 [2024-12-06 16:45:51.936714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:03.362 [2024-12-06 16:45:51.936720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:03.362 [2024-12-06 16:45:51.937717] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:03.362 [2024-12-06 16:45:51.937723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:03.362 [2024-12-06 16:45:51.938721] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:03.362 [2024-12-06 16:45:51.938727] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:19:03.362 [2024-12-06 16:45:51.938731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:03.362 [2024-12-06 16:45:51.938736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:03.362 [2024-12-06 16:45:51.938842] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:03.362 [2024-12-06 16:45:51.938845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:03.362 [2024-12-06 16:45:51.938849] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:03.362 [2024-12-06 16:45:51.939729] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:03.362 [2024-12-06 16:45:51.940736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:03.362 [2024-12-06 16:45:51.941742] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:03.362 [2024-12-06 16:45:51.942746] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:03.362 [2024-12-06 16:45:51.942776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:03.362 [2024-12-06 16:45:51.943752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:03.362 [2024-12-06 16:45:51.943758] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:03.362 [2024-12-06 16:45:51.943761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:03.362 [2024-12-06 16:45:51.943776] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:03.362 [2024-12-06 16:45:51.943784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:03.362 [2024-12-06 16:45:51.943794] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:03.362 [2024-12-06 16:45:51.943798] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.362 [2024-12-06 16:45:51.943800] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.362 [2024-12-06 16:45:51.943809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.362 [2024-12-06 16:45:51.950107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:03.362 
[2024-12-06 16:45:51.950115] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:03.362 [2024-12-06 16:45:51.950121] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:03.362 [2024-12-06 16:45:51.950125] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:03.363 [2024-12-06 16:45:51.950128] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:03.363 [2024-12-06 16:45:51.950131] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:03.363 [2024-12-06 16:45:51.950134] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:03.363 [2024-12-06 16:45:51.950138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.950143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.950150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:51.958104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:51.958113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.363 [2024-12-06 16:45:51.958119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.363 [2024-12-06 16:45:51.958125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.363 [2024-12-06 16:45:51.958131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:03.363 [2024-12-06 16:45:51.958135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.958141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.958147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:51.966105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:51.966110] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:03.363 [2024-12-06 16:45:51.966114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:19:03.363 [2024-12-06 16:45:51.966119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.966123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.966129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:51.974105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:51.974150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.974159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.974164] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:03.363 [2024-12-06 16:45:51.974167] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:03.363 [2024-12-06 16:45:51.974170] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.363 [2024-12-06 16:45:51.974174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:51.982105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:51.982112] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:03.363 [2024-12-06 16:45:51.982119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.982125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.982129] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:03.363 [2024-12-06 16:45:51.982132] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.363 [2024-12-06 16:45:51.982135] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.363 [2024-12-06 16:45:51.982139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:51.990106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:51.990116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.990122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.990126] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:03.363 [2024-12-06 16:45:51.990129] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.363 [2024-12-06 16:45:51.990132] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.363 [2024-12-06 16:45:51.990136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:51.998105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:51.998112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.998117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.998122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.998127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.998131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.998136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.998140] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:03.363 [2024-12-06 16:45:51.998143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:03.363 [2024-12-06 16:45:51.998147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:03.363 [2024-12-06 16:45:51.998159] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:52.006106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:52.006116] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:52.014104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:52.014120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:52.022104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:19:03.363 [2024-12-06 16:45:52.022113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:03.363 [2024-12-06 16:45:52.030104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:03.363 [2024-12-06 16:45:52.030115] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:03.364 [2024-12-06 16:45:52.030119] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:03.364 [2024-12-06 16:45:52.030121] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:03.364 [2024-12-06 16:45:52.030124] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:03.364 [2024-12-06 16:45:52.030126] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:03.364 [2024-12-06 16:45:52.030131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:03.364 [2024-12-06 16:45:52.030136] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:03.364 [2024-12-06 16:45:52.030139] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:03.364 [2024-12-06 16:45:52.030141] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.364 [2024-12-06 16:45:52.030146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:03.364 [2024-12-06 16:45:52.030151] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:03.364 [2024-12-06 16:45:52.030154] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:03.364 [2024-12-06 16:45:52.030156] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.364 [2024-12-06 16:45:52.030160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:03.364 [2024-12-06 16:45:52.030166] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:03.364 [2024-12-06 16:45:52.030168] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:03.364 [2024-12-06 16:45:52.030172] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:03.364 [2024-12-06 16:45:52.030176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:03.364 [2024-12-06 16:45:52.038106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:03.364 [2024-12-06 16:45:52.038117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:03.364 [2024-12-06 16:45:52.038124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:03.364 
[2024-12-06 16:45:52.038129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:03.364 ===================================================== 00:19:03.364 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:03.364 ===================================================== 00:19:03.364 Controller Capabilities/Features 00:19:03.364 ================================ 00:19:03.364 Vendor ID: 4e58 00:19:03.364 Subsystem Vendor ID: 4e58 00:19:03.364 Serial Number: SPDK2 00:19:03.364 Model Number: SPDK bdev Controller 00:19:03.364 Firmware Version: 25.01 00:19:03.364 Recommended Arb Burst: 6 00:19:03.364 IEEE OUI Identifier: 8d 6b 50 00:19:03.364 Multi-path I/O 00:19:03.364 May have multiple subsystem ports: Yes 00:19:03.364 May have multiple controllers: Yes 00:19:03.364 Associated with SR-IOV VF: No 00:19:03.364 Max Data Transfer Size: 131072 00:19:03.364 Max Number of Namespaces: 32 00:19:03.364 Max Number of I/O Queues: 127 00:19:03.364 NVMe Specification Version (VS): 1.3 00:19:03.364 NVMe Specification Version (Identify): 1.3 00:19:03.364 Maximum Queue Entries: 256 00:19:03.364 Contiguous Queues Required: Yes 00:19:03.364 Arbitration Mechanisms Supported 00:19:03.364 Weighted Round Robin: Not Supported 00:19:03.364 Vendor Specific: Not Supported 00:19:03.364 Reset Timeout: 15000 ms 00:19:03.364 Doorbell Stride: 4 bytes 00:19:03.364 NVM Subsystem Reset: Not Supported 00:19:03.364 Command Sets Supported 00:19:03.364 NVM Command Set: Supported 00:19:03.364 Boot Partition: Not Supported 00:19:03.364 Memory Page Size Minimum: 4096 bytes 00:19:03.364 Memory Page Size Maximum: 4096 bytes 00:19:03.364 Persistent Memory Region: Not Supported 00:19:03.364 Optional Asynchronous Events Supported 00:19:03.364 Namespace Attribute Notices: Supported 00:19:03.364 Firmware Activation Notices: Not Supported 00:19:03.364 ANA Change Notices: Not Supported 00:19:03.364 PLE Aggregate Log Change Notices: Not Supported 00:19:03.364 LBA Status Info Alert Notices: Not Supported 00:19:03.364 EGE Aggregate Log Change Notices: Not Supported 00:19:03.364 Normal NVM Subsystem Shutdown event: Not Supported 00:19:03.364 Zone Descriptor Change Notices: Not Supported 00:19:03.364 Discovery Log Change Notices: Not Supported 00:19:03.364 Controller Attributes 00:19:03.364 128-bit Host Identifier: Supported 00:19:03.364 Non-Operational Permissive Mode: Not Supported 00:19:03.364 NVM Sets: Not Supported 00:19:03.364 Read Recovery Levels: Not Supported 00:19:03.364 Endurance Groups: Not Supported 00:19:03.364 Predictable Latency Mode: Not Supported 00:19:03.364 Traffic Based Keep ALive: Not Supported 00:19:03.364 Namespace Granularity: Not Supported 00:19:03.364 SQ Associations: Not Supported 00:19:03.364 UUID List: Not Supported 00:19:03.364 Multi-Domain Subsystem: Not Supported 00:19:03.364 Fixed Capacity Management: Not Supported 00:19:03.364 Variable Capacity Management: Not Supported 00:19:03.364 Delete Endurance Group: Not Supported 00:19:03.364 Delete NVM Set: Not Supported 00:19:03.364 Extended LBA Formats Supported: Not Supported 00:19:03.364 Flexible Data Placement Supported: Not Supported 00:19:03.364 00:19:03.364 Controller Memory Buffer Support 00:19:03.364 ================================ 00:19:03.364 Supported: No 00:19:03.364 00:19:03.364 Persistent Memory Region Support 00:19:03.364 ================================ 00:19:03.364 Supported: No 00:19:03.364 00:19:03.364 Admin Command Set Attributes 
00:19:03.364 ============================ 00:19:03.364 Security Send/Receive: Not Supported 00:19:03.364 Format NVM: Not Supported 00:19:03.364 Firmware Activate/Download: Not Supported 00:19:03.364 Namespace Management: Not Supported 00:19:03.364 Device Self-Test: Not Supported 00:19:03.364 Directives: Not Supported 00:19:03.364 NVMe-MI: Not Supported 00:19:03.364 Virtualization Management: Not Supported 00:19:03.364 Doorbell Buffer Config: Not Supported 00:19:03.364 Get LBA Status Capability: Not Supported 00:19:03.364 Command & Feature Lockdown Capability: Not Supported 00:19:03.364 Abort Command Limit: 4 00:19:03.364 Async Event Request Limit: 4 00:19:03.364 Number of Firmware Slots: N/A 00:19:03.364 Firmware Slot 1 Read-Only: N/A 00:19:03.364 Firmware Activation Without Reset: N/A 00:19:03.364 Multiple Update Detection Support: N/A 00:19:03.365 Firmware Update Granularity: No Information Provided 00:19:03.365 Per-Namespace SMART Log: No 00:19:03.365 Asymmetric Namespace Access Log Page: Not Supported 00:19:03.365 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:03.365 Command Effects Log Page: Supported 00:19:03.365 Get Log Page Extended Data: Supported 00:19:03.365 Telemetry Log Pages: Not Supported 00:19:03.365 Persistent Event Log Pages: Not Supported 00:19:03.365 Supported Log Pages Log Page: May Support 00:19:03.365 Commands Supported & Effects Log Page: Not Supported 00:19:03.365 Feature Identifiers & Effects Log Page:May Support 00:19:03.365 NVMe-MI Commands & Effects Log Page: May Support 00:19:03.365 Data Area 4 for Telemetry Log: Not Supported 00:19:03.365 Error Log Page Entries Supported: 128 00:19:03.365 Keep Alive: Supported 00:19:03.365 Keep Alive Granularity: 10000 ms 00:19:03.365 00:19:03.365 NVM Command Set Attributes 00:19:03.365 ========================== 00:19:03.365 Submission Queue Entry Size 00:19:03.365 Max: 64 00:19:03.365 Min: 64 00:19:03.365 Completion Queue Entry Size 00:19:03.365 Max: 16 00:19:03.365 Min: 16 00:19:03.365 Number of Namespaces: 32 00:19:03.365 Compare Command: Supported 00:19:03.365 Write Uncorrectable Command: Not Supported 00:19:03.365 Dataset Management Command: Supported 00:19:03.365 Write Zeroes Command: Supported 00:19:03.365 Set Features Save Field: Not Supported 00:19:03.365 Reservations: Not Supported 00:19:03.365 Timestamp: Not Supported 00:19:03.365 Copy: Supported 00:19:03.365 Volatile Write Cache: Present 00:19:03.365 Atomic Write Unit (Normal): 1 00:19:03.365 Atomic Write Unit (PFail): 1 00:19:03.365 Atomic Compare & Write Unit: 1 00:19:03.365 Fused Compare & Write: Supported 00:19:03.365 Scatter-Gather List 00:19:03.365 SGL Command Set: Supported (Dword aligned) 00:19:03.365 SGL Keyed: Not Supported 00:19:03.365 SGL Bit Bucket Descriptor: Not Supported 00:19:03.365 SGL Metadata Pointer: Not Supported 00:19:03.365 Oversized SGL: Not Supported 00:19:03.365 SGL Metadata Address: Not Supported 00:19:03.365 SGL Offset: Not Supported 00:19:03.365 Transport SGL Data Block: Not Supported 00:19:03.365 Replay Protected Memory Block: Not Supported 00:19:03.365 00:19:03.365 Firmware Slot Information 00:19:03.365 ========================= 00:19:03.365 Active slot: 1 00:19:03.365 Slot 1 Firmware Revision: 25.01 00:19:03.365 00:19:03.365 00:19:03.365 Commands Supported and Effects 00:19:03.365 ============================== 00:19:03.365 Admin Commands 00:19:03.365 -------------- 00:19:03.365 Get Log Page (02h): Supported 00:19:03.365 Identify (06h): Supported 00:19:03.365 Abort (08h): Supported 00:19:03.365 Set Features (09h): Supported 
00:19:03.365 Get Features (0Ah): Supported 00:19:03.365 Asynchronous Event Request (0Ch): Supported 00:19:03.365 Keep Alive (18h): Supported 00:19:03.365 I/O Commands 00:19:03.365 ------------ 00:19:03.365 Flush (00h): Supported LBA-Change 00:19:03.365 Write (01h): Supported LBA-Change 00:19:03.365 Read (02h): Supported 00:19:03.365 Compare (05h): Supported 00:19:03.365 Write Zeroes (08h): Supported LBA-Change 00:19:03.365 Dataset Management (09h): Supported LBA-Change 00:19:03.365 Copy (19h): Supported LBA-Change 00:19:03.365 00:19:03.365 Error Log 00:19:03.365 ========= 00:19:03.365 00:19:03.365 Arbitration 00:19:03.365 =========== 00:19:03.365 Arbitration Burst: 1 00:19:03.365 00:19:03.365 Power Management 00:19:03.365 ================ 00:19:03.365 Number of Power States: 1 00:19:03.365 Current Power State: Power State #0 00:19:03.365 Power State #0: 00:19:03.365 Max Power: 0.00 W 00:19:03.365 Non-Operational State: Operational 00:19:03.365 Entry Latency: Not Reported 00:19:03.365 Exit Latency: Not Reported 00:19:03.365 Relative Read Throughput: 0 00:19:03.365 Relative Read Latency: 0 00:19:03.365 Relative Write Throughput: 0 00:19:03.365 Relative Write Latency: 0 00:19:03.365 Idle Power: Not Reported 00:19:03.365 Active Power: Not Reported 00:19:03.365 Non-Operational Permissive Mode: Not Supported 00:19:03.365 00:19:03.365 Health Information 00:19:03.365 ================== 00:19:03.365 Critical Warnings: 00:19:03.365 Available Spare Space: OK 00:19:03.365 Temperature: OK 00:19:03.365 Device Reliability: OK 00:19:03.365 Read Only: No 00:19:03.365 Volatile Memory Backup: OK 00:19:03.365 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:03.365 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:03.365 Available Spare: 0% 00:19:03.365 Available Sp[2024-12-06 16:45:52.038200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:03.365 [2024-12-06 16:45:52.046105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:03.365 [2024-12-06 16:45:52.046128] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:03.365 [2024-12-06 16:45:52.046134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.365 [2024-12-06 16:45:52.046139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.365 [2024-12-06 16:45:52.046143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.365 [2024-12-06 16:45:52.046148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:03.365 [2024-12-06 16:45:52.046186] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:03.366 [2024-12-06 16:45:52.046193] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:03.366 [2024-12-06 16:45:52.047193] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:03.366 [2024-12-06 16:45:52.047228] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:03.366 [2024-12-06 16:45:52.047233] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:03.366 [2024-12-06 16:45:52.048195] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:03.366 [2024-12-06 16:45:52.048203] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:03.366 [2024-12-06 16:45:52.048245] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:03.366 [2024-12-06 16:45:52.051106] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:03.627 are Threshold: 0% 00:19:03.627 Life Percentage Used: 0% 00:19:03.627 Data Units Read: 0 00:19:03.627 Data Units Written: 0 00:19:03.627 Host Read Commands: 0 00:19:03.627 Host Write Commands: 0 00:19:03.627 Controller Busy Time: 0 minutes 00:19:03.627 Power Cycles: 0 00:19:03.627 Power On Hours: 0 hours 00:19:03.627 Unsafe Shutdowns: 0 00:19:03.627 Unrecoverable Media Errors: 0 00:19:03.627 Lifetime Error Log Entries: 0 00:19:03.627 Warning Temperature Time: 0 minutes 00:19:03.627 Critical Temperature Time: 0 minutes 00:19:03.627 00:19:03.627 Number of Queues 00:19:03.627 ================ 00:19:03.627 Number of I/O Submission Queues: 127 00:19:03.627 Number of I/O Completion Queues: 127 00:19:03.627 00:19:03.627 Active Namespaces 00:19:03.627 ================= 00:19:03.627 Namespace ID:1 00:19:03.627 Error Recovery Timeout: Unlimited 00:19:03.627 Command Set Identifier: NVM (00h) 00:19:03.627 Deallocate: Supported 00:19:03.627 Deallocated/Unwritten Error: Not Supported 00:19:03.627 Deallocated Read Value: Unknown 00:19:03.627 Deallocate in Write Zeroes: Not Supported 00:19:03.627 Deallocated Guard Field: 0xFFFF 00:19:03.627 Flush: Supported 00:19:03.627 Reservation: Supported 00:19:03.627 Namespace Sharing Capabilities: Multiple Controllers 00:19:03.627 Size (in LBAs): 131072 (0GiB) 00:19:03.627 Capacity (in LBAs): 131072 (0GiB) 00:19:03.627 Utilization (in LBAs): 131072 (0GiB) 00:19:03.627 NGUID: 4B6DA3680FB646A8964D0A752CA3B1CF 00:19:03.627 UUID: 4b6da368-0fb6-46a8-964d-0a752ca3b1cf 00:19:03.627 Thin Provisioning: Not Supported 00:19:03.627 Per-NS Atomic Units: Yes 00:19:03.627 Atomic Boundary Size (Normal): 0 00:19:03.627 Atomic Boundary Size (PFail): 0 00:19:03.627 Atomic Boundary Offset: 0 00:19:03.627 Maximum Single Source Range Length: 65535 00:19:03.627 Maximum Copy Length: 65535 00:19:03.627 Maximum Source Range Count: 1 00:19:03.627 NGUID/EUI64 Never Reused: No 00:19:03.627 Namespace Write Protected: No 00:19:03.627 Number of LBA Formats: 1 00:19:03.627 Current LBA Format: LBA Format #00 00:19:03.627 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:03.627 00:19:03.627 16:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:03.627 [2024-12-06 16:45:52.218487] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:08.905 Initializing NVMe Controllers 00:19:08.905 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:08.905 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:08.905 Initialization complete. Launching workers. 00:19:08.905 ======================================================== 00:19:08.905 Latency(us) 00:19:08.905 Device Information : IOPS MiB/s Average min max 00:19:08.905 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40047.02 156.43 3195.90 862.07 7969.79 00:19:08.905 ======================================================== 00:19:08.905 Total : 40047.02 156.43 3195.90 862.07 7969.79 00:19:08.905 00:19:08.905 [2024-12-06 16:45:57.322286] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:08.905 16:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:08.905 [2024-12-06 16:45:57.493825] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:14.187 Initializing NVMe Controllers 00:19:14.187 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:14.187 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:14.187 Initialization complete. Launching workers. 00:19:14.187 ======================================================== 00:19:14.187 Latency(us) 00:19:14.187 Device Information : IOPS MiB/s Average min max 00:19:14.187 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40018.98 156.32 3201.11 862.13 6803.77 00:19:14.187 ======================================================== 00:19:14.187 Total : 40018.98 156.32 3201.11 862.13 6803.77 00:19:14.187 00:19:14.187 [2024-12-06 16:46:02.514185] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:14.187 16:46:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:14.187 [2024-12-06 16:46:02.714385] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:19.456 [2024-12-06 16:46:07.861190] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:19.456 Initializing NVMe Controllers 00:19:19.456 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:19.456 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:19.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:19.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:19.456 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:19.456 Initialization complete. Launching workers. 
00:19:19.456 Starting thread on core 2 00:19:19.456 Starting thread on core 3 00:19:19.456 Starting thread on core 1 00:19:19.456 16:46:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:19.456 [2024-12-06 16:46:08.105531] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.751 [2024-12-06 16:46:11.162921] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:22.751 Initializing NVMe Controllers 00:19:22.751 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.751 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.751 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:22.751 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:22.751 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:22.751 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:22.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:22.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:22.751 Initialization complete. Launching workers. 00:19:22.751 Starting thread on core 1 with urgent priority queue 00:19:22.751 Starting thread on core 2 with urgent priority queue 00:19:22.751 Starting thread on core 3 with urgent priority queue 00:19:22.751 Starting thread on core 0 with urgent priority queue 00:19:22.751 SPDK bdev Controller (SPDK2 ) core 0: 9700.00 IO/s 10.31 secs/100000 ios 00:19:22.751 SPDK bdev Controller (SPDK2 ) core 1: 11918.33 IO/s 8.39 secs/100000 ios 00:19:22.751 SPDK bdev Controller (SPDK2 ) core 2: 11439.67 IO/s 8.74 secs/100000 ios 00:19:22.751 SPDK bdev Controller (SPDK2 ) core 3: 13616.00 IO/s 7.34 secs/100000 ios 00:19:22.751 ======================================================== 00:19:22.751 00:19:22.751 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:22.751 [2024-12-06 16:46:11.395436] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.751 Initializing NVMe Controllers 00:19:22.751 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.751 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.751 Namespace ID: 1 size: 0GB 00:19:22.751 Initialization complete. 00:19:22.751 INFO: using host memory buffer for IO 00:19:22.751 Hello world! 
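Every tool in this stretch of the run (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the target through the same vfio-user transport ID string rather than a PCI address. A minimal sketch of that pattern, reusing the exact socket path, NQN, and flags from this run (the TRID variable is only for illustration):

  # Transport ID: transport type, vfio-user socket directory, subsystem NQN
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # 4 KiB reads (-o 4096 -w read) at queue depth 128 (-q 128) for 5 s (-t 5) on core 1 (-c 0x2)
  ./build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2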
00:19:22.751 [2024-12-06 16:46:11.405510] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:23.010 16:46:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:23.010 [2024-12-06 16:46:11.635488] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:24.389 Initializing NVMe Controllers 00:19:24.389 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.389 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.389 Initialization complete. Launching workers. 00:19:24.389 submit (in ns) avg, min, max = 5361.3, 2812.5, 3999020.0 00:19:24.389 complete (in ns) avg, min, max = 16487.9, 1636.7, 7987258.3 00:19:24.389 00:19:24.389 Submit histogram 00:19:24.389 ================ 00:19:24.389 Range in us Cumulative Count 00:19:24.389 2.800 - 2.813: 0.0050% ( 1) 00:19:24.389 2.813 - 2.827: 0.3440% ( 68) 00:19:24.389 2.827 - 2.840: 1.2613% ( 184) 00:19:24.389 2.840 - 2.853: 3.1907% ( 387) 00:19:24.389 2.853 - 2.867: 6.5610% ( 676) 00:19:24.389 2.867 - 2.880: 10.5943% ( 809) 00:19:24.389 2.880 - 2.893: 15.3954% ( 963) 00:19:24.389 2.893 - 2.907: 20.6900% ( 1062) 00:19:24.389 2.907 - 2.920: 26.2937% ( 1124) 00:19:24.389 2.920 - 2.933: 32.5406% ( 1253) 00:19:24.389 2.933 - 2.947: 38.4934% ( 1194) 00:19:24.389 2.947 - 2.960: 45.5878% ( 1423) 00:19:24.389 2.960 - 2.973: 53.4500% ( 1577) 00:19:24.389 2.973 - 2.987: 62.5486% ( 1825) 00:19:24.389 2.987 - 3.000: 70.9642% ( 1688) 00:19:24.389 3.000 - 3.013: 78.6918% ( 1550) 00:19:24.389 3.013 - 3.027: 85.6317% ( 1392) 00:19:24.389 3.027 - 3.040: 91.0908% ( 1095) 00:19:24.389 3.040 - 3.053: 95.1890% ( 822) 00:19:24.389 3.053 - 3.067: 96.9837% ( 360) 00:19:24.389 3.067 - 3.080: 98.2351% ( 251) 00:19:24.389 3.080 - 3.093: 98.8633% ( 126) 00:19:24.389 3.093 - 3.107: 99.2921% ( 86) 00:19:24.389 3.107 - 3.120: 99.5064% ( 43) 00:19:24.389 3.120 - 3.133: 99.5413% ( 7) 00:19:24.389 3.133 - 3.147: 99.5563% ( 3) 00:19:24.389 3.147 - 3.160: 99.5762% ( 4) 00:19:24.389 3.227 - 3.240: 99.5812% ( 1) 00:19:24.389 3.307 - 3.320: 99.5862% ( 1) 00:19:24.389 3.387 - 3.400: 99.5912% ( 1) 00:19:24.389 3.493 - 3.520: 99.5962% ( 1) 00:19:24.389 3.627 - 3.653: 99.6012% ( 1) 00:19:24.390 3.653 - 3.680: 99.6061% ( 1) 00:19:24.390 3.680 - 3.707: 99.6111% ( 1) 00:19:24.390 3.707 - 3.733: 99.6161% ( 1) 00:19:24.390 4.000 - 4.027: 99.6211% ( 1) 00:19:24.390 4.240 - 4.267: 99.6261% ( 1) 00:19:24.390 4.533 - 4.560: 99.6311% ( 1) 00:19:24.390 4.613 - 4.640: 99.6361% ( 1) 00:19:24.390 4.773 - 4.800: 99.6460% ( 2) 00:19:24.390 4.800 - 4.827: 99.6510% ( 1) 00:19:24.390 4.853 - 4.880: 99.6560% ( 1) 00:19:24.390 4.960 - 4.987: 99.6660% ( 2) 00:19:24.390 4.987 - 5.013: 99.6759% ( 2) 00:19:24.390 5.013 - 5.040: 99.6859% ( 2) 00:19:24.390 5.067 - 5.093: 99.6909% ( 1) 00:19:24.390 5.173 - 5.200: 99.6959% ( 1) 00:19:24.390 5.253 - 5.280: 99.7009% ( 1) 00:19:24.390 5.307 - 5.333: 99.7059% ( 1) 00:19:24.390 5.333 - 5.360: 99.7108% ( 1) 00:19:24.390 5.360 - 5.387: 99.7158% ( 1) 00:19:24.390 5.387 - 5.413: 99.7208% ( 1) 00:19:24.390 5.413 - 5.440: 99.7258% ( 1) 00:19:24.390 5.440 - 5.467: 99.7308% ( 1) 00:19:24.390 5.547 - 5.573: 99.7358% ( 1) 00:19:24.390 5.573 - 5.600: 99.7457% ( 2) 00:19:24.390 5.600 - 5.627: 99.7507% ( 1) 00:19:24.390 5.787 - 5.813: 
99.7557% ( 1) 00:19:24.390 5.840 - 5.867: 99.7757% ( 4) 00:19:24.390 5.867 - 5.893: 99.7806% ( 1) 00:19:24.390 5.893 - 5.920: 99.7856% ( 1) 00:19:24.390 6.053 - 6.080: 99.7906% ( 1) 00:19:24.390 6.080 - 6.107: 99.8056% ( 3) 00:19:24.390 6.133 - 6.160: 99.8105% ( 1) 00:19:24.390 6.160 - 6.187: 99.8255% ( 3) 00:19:24.390 6.240 - 6.267: 99.8305% ( 1) 00:19:24.390 6.267 - 6.293: 99.8355% ( 1) 00:19:24.390 6.293 - 6.320: 99.8454% ( 2) 00:19:24.390 6.320 - 6.347: 99.8504% ( 1) 00:19:24.390 6.347 - 6.373: 99.8554% ( 1) 00:19:24.390 6.480 - 6.507: 99.8604% ( 1) 00:19:24.390 6.507 - 6.533: 99.8654% ( 1) 00:19:24.390 6.533 - 6.560: 99.8704% ( 1) 00:19:24.390 6.560 - 6.587: 99.8754% ( 1) 00:19:24.390 6.667 - 6.693: 99.8803% ( 1) 00:19:24.390 6.827 - 6.880: 99.8853% ( 1) 00:19:24.390 6.933 - 6.987: 99.8903% ( 1) 00:19:24.390 6.987 - 7.040: 99.9003% ( 2) 00:19:24.390 7.040 - 7.093: 99.9053% ( 1) 00:19:24.390 [2024-12-06 16:46:12.726619] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:24.390 7.200 - 7.253: 99.9103% ( 1) 00:19:24.390 7.307 - 7.360: 99.9202% ( 2) 00:19:24.390 7.520 - 7.573: 99.9252% ( 1) 00:19:24.390 8.320 - 8.373: 99.9302% ( 1) 00:19:24.390 8.693 - 8.747: 99.9352% ( 1) 00:19:24.390 10.667 - 10.720: 99.9402% ( 1) 00:19:24.390 3986.773 - 4014.080: 100.0000% ( 12) 00:19:24.390 00:19:24.390 Complete histogram 00:19:24.390 ================== 00:19:24.390 Range in us Cumulative Count 00:19:24.390 1.633 - 1.640: 0.4686% ( 94) 00:19:24.390 1.640 - 1.647: 1.1517% ( 137) 00:19:24.390 1.647 - 1.653: 1.2314% ( 16) 00:19:24.390 1.653 - 1.660: 1.3910% ( 32) 00:19:24.390 1.660 - 1.667: 1.4657% ( 15) 00:19:24.390 1.667 - 1.673: 1.4857% ( 4) 00:19:24.390 1.673 - 1.680: 1.5156% ( 6) 00:19:24.390 1.680 - 1.687: 1.5256% ( 2) 00:19:24.390 1.687 - 1.693: 9.2183% ( 1543) 00:19:24.390 1.693 - 1.700: 41.5744% ( 6490) 00:19:24.390 1.700 - 1.707: 48.0357% ( 1296) 00:19:24.390 1.707 - 1.720: 73.8907% ( 5186) 00:19:24.390 1.720 - 1.733: 83.1539% ( 1858) 00:19:24.390 1.733 - 1.747: 84.7243% ( 315) 00:19:24.390 1.747 - 1.760: 87.0326% ( 463) 00:19:24.390 1.760 - 1.773: 91.3601% ( 868) 00:19:24.390 1.773 - 1.787: 95.7324% ( 877) 00:19:24.390 1.787 - 1.800: 98.4794% ( 551) 00:19:24.390 1.800 - 1.813: 99.3220% ( 169) 00:19:24.390 1.813 - 1.827: 99.4267% ( 21) 00:19:24.390 1.827 - 1.840: 99.4466% ( 4) 00:19:24.390 1.933 - 1.947: 99.4516% ( 1) 00:19:24.390 3.347 - 3.360: 99.4566% ( 1) 00:19:24.390 3.440 - 3.467: 99.4616% ( 1) 00:19:24.390 4.053 - 4.080: 99.4665% ( 1) 00:19:24.390 4.107 - 4.133: 99.4715% ( 1) 00:19:24.390 4.160 - 4.187: 99.4765% ( 1) 00:19:24.390 4.293 - 4.320: 99.4815% ( 1) 00:19:24.390 4.480 - 4.507: 99.4865% ( 1) 00:19:24.390 4.533 - 4.560: 99.4915% ( 1) 00:19:24.390 4.613 - 4.640: 99.4965% ( 1) 00:19:24.390 4.640 - 4.667: 99.5014% ( 1) 00:19:24.390 4.667 - 4.693: 99.5064% ( 1) 00:19:24.390 4.720 - 4.747: 99.5214% ( 3) 00:19:24.390 4.747 - 4.773: 99.5264% ( 1) 00:19:24.390 4.853 - 4.880: 99.5314% ( 1) 00:19:24.390 4.933 - 4.960: 99.5413% ( 2) 00:19:24.390 4.960 - 4.987: 99.5463% ( 1) 00:19:24.390 4.987 - 5.013: 99.5513% ( 1) 00:19:24.390 5.120 - 5.147: 99.5563% ( 1) 00:19:24.390 5.387 - 5.413: 99.5613% ( 1) 00:19:24.390 5.520 - 5.547: 99.5712% ( 2) 00:19:24.390 5.600 - 5.627: 99.5762% ( 1) 00:19:24.390 5.653 - 5.680: 99.5812% ( 1) 00:19:24.390 5.733 - 5.760: 99.5912% ( 2) 00:19:24.390 5.867 - 5.893: 99.5962% ( 1) 00:19:24.390 5.893 - 5.920: 99.6012% ( 1) 00:19:24.390 6.560 - 6.587: 99.6061% ( 1) 00:19:24.390 7.093 - 7.147: 99.6111% ( 1) 
00:19:24.390 11.307 - 11.360: 99.6161% ( 1) 00:19:24.390 34.133 - 34.347: 99.6211% ( 1) 00:19:24.390 41.387 - 41.600: 99.6261% ( 1) 00:19:24.390 46.720 - 46.933: 99.6311% ( 1) 00:19:24.390 488.107 - 491.520: 99.6361% ( 1) 00:19:24.390 3986.773 - 4014.080: 99.9950% ( 72) 00:19:24.390 7973.547 - 8028.160: 100.0000% ( 1) 00:19:24.390 00:19:24.390 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:24.390 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:24.390 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:24.390 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:24.391 [ 00:19:24.391 { 00:19:24.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:24.391 "subtype": "Discovery", 00:19:24.391 "listen_addresses": [], 00:19:24.391 "allow_any_host": true, 00:19:24.391 "hosts": [] 00:19:24.391 }, 00:19:24.391 { 00:19:24.391 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:24.391 "subtype": "NVMe", 00:19:24.391 "listen_addresses": [ 00:19:24.391 { 00:19:24.391 "trtype": "VFIOUSER", 00:19:24.391 "adrfam": "IPv4", 00:19:24.391 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:24.391 "trsvcid": "0" 00:19:24.391 } 00:19:24.391 ], 00:19:24.391 "allow_any_host": true, 00:19:24.391 "hosts": [], 00:19:24.391 "serial_number": "SPDK1", 00:19:24.391 "model_number": "SPDK bdev Controller", 00:19:24.391 "max_namespaces": 32, 00:19:24.391 "min_cntlid": 1, 00:19:24.391 "max_cntlid": 65519, 00:19:24.391 "namespaces": [ 00:19:24.391 { 00:19:24.391 "nsid": 1, 00:19:24.391 "bdev_name": "Malloc1", 00:19:24.391 "name": "Malloc1", 00:19:24.391 "nguid": "1CBEFDCC29FC414F93DB1F3D290F3AB8", 00:19:24.391 "uuid": "1cbefdcc-29fc-414f-93db-1f3d290f3ab8" 00:19:24.391 }, 00:19:24.391 { 00:19:24.391 "nsid": 2, 00:19:24.391 "bdev_name": "Malloc3", 00:19:24.391 "name": "Malloc3", 00:19:24.391 "nguid": "15659F2DBDA94249B6DCBEF097C926E7", 00:19:24.391 "uuid": "15659f2d-bda9-4249-b6dc-bef097c926e7" 00:19:24.391 } 00:19:24.391 ] 00:19:24.391 }, 00:19:24.391 { 00:19:24.391 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:24.391 "subtype": "NVMe", 00:19:24.391 "listen_addresses": [ 00:19:24.391 { 00:19:24.391 "trtype": "VFIOUSER", 00:19:24.391 "adrfam": "IPv4", 00:19:24.391 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:24.391 "trsvcid": "0" 00:19:24.391 } 00:19:24.391 ], 00:19:24.391 "allow_any_host": true, 00:19:24.391 "hosts": [], 00:19:24.391 "serial_number": "SPDK2", 00:19:24.391 "model_number": "SPDK bdev Controller", 00:19:24.391 "max_namespaces": 32, 00:19:24.391 "min_cntlid": 1, 00:19:24.391 "max_cntlid": 65519, 00:19:24.391 "namespaces": [ 00:19:24.391 { 00:19:24.391 "nsid": 1, 00:19:24.391 "bdev_name": "Malloc2", 00:19:24.391 "name": "Malloc2", 00:19:24.391 "nguid": "4B6DA3680FB646A8964D0A752CA3B1CF", 00:19:24.391 "uuid": "4b6da368-0fb6-46a8-964d-0a752ca3b1cf" 00:19:24.391 } 00:19:24.391 ] 00:19:24.391 } 00:19:24.391 ] 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:24.391 
16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2215727 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:24.391 16:46:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:24.391 [2024-12-06 16:46:13.070441] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:24.650 Malloc4 00:19:24.650 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:24.650 [2024-12-06 16:46:13.247735] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:24.650 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:24.650 Asynchronous Event Request test 00:19:24.650 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.650 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:24.650 Registering asynchronous event callbacks... 00:19:24.650 Starting namespace attribute notice tests for all controllers... 00:19:24.650 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:24.650 aer_cb - Changed Namespace 00:19:24.650 Cleaning up... 
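The "aer_cb - Changed Namespace" callback above is provoked deliberately: while the aer tool waits on the touch file, the harness hot-attaches a new namespace to the live subsystem, and the controller raises a namespace-attribute-changed AEN. A sketch of the trigger, using the same RPCs that appear in this run (default RPC socket assumed):

  # Create a 64 MiB malloc bdev (512 B blocks) and attach it as NSID 2 of cnode2;
  # the attach is what fires the AEN the aer tool is waiting on
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2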
00:19:24.910 [ 00:19:24.910 { 00:19:24.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:24.910 "subtype": "Discovery", 00:19:24.910 "listen_addresses": [], 00:19:24.910 "allow_any_host": true, 00:19:24.910 "hosts": [] 00:19:24.910 }, 00:19:24.910 { 00:19:24.910 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:24.910 "subtype": "NVMe", 00:19:24.910 "listen_addresses": [ 00:19:24.910 { 00:19:24.910 "trtype": "VFIOUSER", 00:19:24.910 "adrfam": "IPv4", 00:19:24.910 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:24.910 "trsvcid": "0" 00:19:24.910 } 00:19:24.910 ], 00:19:24.910 "allow_any_host": true, 00:19:24.910 "hosts": [], 00:19:24.910 "serial_number": "SPDK1", 00:19:24.910 "model_number": "SPDK bdev Controller", 00:19:24.910 "max_namespaces": 32, 00:19:24.910 "min_cntlid": 1, 00:19:24.910 "max_cntlid": 65519, 00:19:24.910 "namespaces": [ 00:19:24.910 { 00:19:24.910 "nsid": 1, 00:19:24.910 "bdev_name": "Malloc1", 00:19:24.910 "name": "Malloc1", 00:19:24.910 "nguid": "1CBEFDCC29FC414F93DB1F3D290F3AB8", 00:19:24.910 "uuid": "1cbefdcc-29fc-414f-93db-1f3d290f3ab8" 00:19:24.910 }, 00:19:24.910 { 00:19:24.910 "nsid": 2, 00:19:24.910 "bdev_name": "Malloc3", 00:19:24.910 "name": "Malloc3", 00:19:24.910 "nguid": "15659F2DBDA94249B6DCBEF097C926E7", 00:19:24.910 "uuid": "15659f2d-bda9-4249-b6dc-bef097c926e7" 00:19:24.910 } 00:19:24.910 ] 00:19:24.910 }, 00:19:24.910 { 00:19:24.910 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:24.910 "subtype": "NVMe", 00:19:24.910 "listen_addresses": [ 00:19:24.910 { 00:19:24.910 "trtype": "VFIOUSER", 00:19:24.910 "adrfam": "IPv4", 00:19:24.910 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:24.910 "trsvcid": "0" 00:19:24.910 } 00:19:24.910 ], 00:19:24.910 "allow_any_host": true, 00:19:24.910 "hosts": [], 00:19:24.910 "serial_number": "SPDK2", 00:19:24.910 "model_number": "SPDK bdev Controller", 00:19:24.910 "max_namespaces": 32, 00:19:24.910 "min_cntlid": 1, 00:19:24.910 "max_cntlid": 65519, 00:19:24.910 "namespaces": [ 00:19:24.910 { 00:19:24.910 "nsid": 1, 00:19:24.910 "bdev_name": "Malloc2", 00:19:24.910 "name": "Malloc2", 00:19:24.910 "nguid": "4B6DA3680FB646A8964D0A752CA3B1CF", 00:19:24.910 "uuid": "4b6da368-0fb6-46a8-964d-0a752ca3b1cf" 00:19:24.910 }, 00:19:24.910 { 00:19:24.910 "nsid": 2, 00:19:24.910 "bdev_name": "Malloc4", 00:19:24.910 "name": "Malloc4", 00:19:24.910 "nguid": "0C48CC90E61F48CC87A7F61E842343D4", 00:19:24.910 "uuid": "0c48cc90-e61f-48cc-87a7-f61e842343d4" 00:19:24.910 } 00:19:24.910 ] 00:19:24.910 } 00:19:24.910 ] 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2215727 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2205709 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2205709 ']' 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2205709 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2205709 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2205709' 00:19:24.910 killing process with pid 2205709 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2205709 00:19:24.910 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2205709 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2215750 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2215750' 00:19:25.170 Process pid: 2215750 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2215750 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2215750 ']' 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:25.170 [2024-12-06 16:46:13.645215] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:25.170 [2024-12-06 16:46:13.646147] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:19:25.170 [2024-12-06 16:46:13.646189] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:25.170 [2024-12-06 16:46:13.710795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:25.170 [2024-12-06 16:46:13.726692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:25.170 [2024-12-06 16:46:13.726723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:25.170 [2024-12-06 16:46:13.726728] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:25.170 [2024-12-06 16:46:13.726733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:25.170 [2024-12-06 16:46:13.726737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:25.170 [2024-12-06 16:46:13.727971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:25.170 [2024-12-06 16:46:13.728170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.170 [2024-12-06 16:46:13.728501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.170 [2024-12-06 16:46:13.728501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:25.170 [2024-12-06 16:46:13.775044] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:25.170 [2024-12-06 16:46:13.775797] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:25.170 [2024-12-06 16:46:13.775896] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:25.170 [2024-12-06 16:46:13.776039] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:25.170 [2024-12-06 16:46:13.776046] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
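This second target instance differs from the earlier one mainly in being interrupt-driven: the reactors and nvmf poll groups above are switched to intr mode instead of busy polling. A sketch of the bring-up, with every argument taken from the trace (the -M -I transport flags are passed through exactly as the harness passes them):

  # Start the target on cores 0-3 in interrupt mode (-e 0xFFFF enables all tracepoint groups)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # Create the VFIOUSER transport with the harness's interrupt-mode transport flags
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I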
00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:25.170 16:46:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:26.107 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:26.366 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:26.366 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:26.366 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:26.366 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:26.366 16:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:26.625 Malloc1 00:19:26.625 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:26.625 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:26.884 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:27.142 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:27.142 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:27.142 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:27.142 Malloc2 00:19:27.142 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:27.400 16:46:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2215750 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2215750 ']' 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2215750 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2215750 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2215750' 00:19:27.659 killing process with pid 2215750 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2215750 00:19:27.659 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2215750 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:27.919 00:19:27.919 real 0m48.726s 00:19:27.919 user 3m9.023s 00:19:27.919 sys 0m2.379s 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:27.919 ************************************ 00:19:27.919 END TEST nvmf_vfio_user 00:19:27.919 ************************************ 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.919 ************************************ 00:19:27.919 START TEST nvmf_vfio_user_nvme_compliance 00:19:27.919 ************************************ 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:27.919 * Looking for test storage... 
00:19:27.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.919 --rc genhtml_branch_coverage=1 00:19:27.919 --rc genhtml_function_coverage=1 00:19:27.919 --rc genhtml_legend=1 00:19:27.919 --rc geninfo_all_blocks=1 00:19:27.919 --rc geninfo_unexecuted_blocks=1 00:19:27.919 00:19:27.919 ' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.919 --rc genhtml_branch_coverage=1 00:19:27.919 --rc genhtml_function_coverage=1 00:19:27.919 --rc genhtml_legend=1 00:19:27.919 --rc geninfo_all_blocks=1 00:19:27.919 --rc geninfo_unexecuted_blocks=1 00:19:27.919 00:19:27.919 ' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.919 --rc genhtml_branch_coverage=1 00:19:27.919 --rc genhtml_function_coverage=1 00:19:27.919 --rc genhtml_legend=1 00:19:27.919 --rc geninfo_all_blocks=1 00:19:27.919 --rc geninfo_unexecuted_blocks=1 00:19:27.919 00:19:27.919 ' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:27.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.919 --rc genhtml_branch_coverage=1 00:19:27.919 --rc genhtml_function_coverage=1 00:19:27.919 --rc genhtml_legend=1 00:19:27.919 --rc geninfo_all_blocks=1 00:19:27.919 --rc 
geninfo_unexecuted_blocks=1 00:19:27.919 00:19:27.919 ' 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.919 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.178 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2216493 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2216493' 00:19:28.179 Process pid: 2216493 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2216493 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2216493 ']' 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:28.179 [2024-12-06 16:46:16.663108] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:19:28.179 [2024-12-06 16:46:16.663185] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.179 [2024-12-06 16:46:16.734053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:28.179 [2024-12-06 16:46:16.754843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.179 [2024-12-06 16:46:16.754883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.179 [2024-12-06 16:46:16.754890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.179 [2024-12-06 16:46:16.754895] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.179 [2024-12-06 16:46:16.754899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.179 [2024-12-06 16:46:16.756353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.179 [2024-12-06 16:46:16.756510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.179 [2024-12-06 16:46:16.756513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:28.179 16:46:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 malloc0 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:29.555 16:46:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.555 16:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:29.555 00:19:29.555 00:19:29.555 CUnit - A unit testing framework for C - Version 2.1-3 00:19:29.555 http://cunit.sourceforge.net/ 00:19:29.555 00:19:29.555 00:19:29.555 Suite: nvme_compliance 00:19:29.555 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-06 16:46:18.045503] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.555 [2024-12-06 16:46:18.046800] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:29.555 [2024-12-06 16:46:18.046811] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:29.555 [2024-12-06 16:46:18.046816] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:29.555 [2024-12-06 16:46:18.048525] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.555 passed 00:19:29.555 Test: admin_identify_ctrlr_verify_fused ...[2024-12-06 16:46:18.128022] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.555 [2024-12-06 16:46:18.131044] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.555 passed 00:19:29.555 Test: admin_identify_ns ...[2024-12-06 16:46:18.210386] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.814 [2024-12-06 16:46:18.271112] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:29.814 [2024-12-06 16:46:18.279115] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:29.814 [2024-12-06 16:46:18.300187] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:19:29.814 passed 00:19:29.814 Test: admin_get_features_mandatory_features ...[2024-12-06 16:46:18.374410] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.814 [2024-12-06 16:46:18.377428] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.814 passed 00:19:29.814 Test: admin_get_features_optional_features ...[2024-12-06 16:46:18.456926] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:29.814 [2024-12-06 16:46:18.459938] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:29.814 passed 00:19:30.072 Test: admin_set_features_number_of_queues ...[2024-12-06 16:46:18.534709] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.072 [2024-12-06 16:46:18.643205] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.072 passed 00:19:30.072 Test: admin_get_log_page_mandatory_logs ...[2024-12-06 16:46:18.717438] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.072 [2024-12-06 16:46:18.720468] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.072 passed 00:19:30.330 Test: admin_get_log_page_with_lpo ...[2024-12-06 16:46:18.797731] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.330 [2024-12-06 16:46:18.866109] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:30.330 [2024-12-06 16:46:18.879164] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.330 passed 00:19:30.330 Test: fabric_property_get ...[2024-12-06 16:46:18.953395] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.330 [2024-12-06 16:46:18.954598] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:30.330 [2024-12-06 16:46:18.956415] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.330 passed 00:19:30.589 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-06 16:46:19.034855] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.589 [2024-12-06 16:46:19.036061] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:30.589 [2024-12-06 16:46:19.037880] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.589 passed 00:19:30.589 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-06 16:46:19.113438] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.589 [2024-12-06 16:46:19.201104] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:30.589 [2024-12-06 16:46:19.220110] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:30.589 [2024-12-06 16:46:19.225181] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.589 passed 00:19:30.848 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-06 16:46:19.301220] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.848 [2024-12-06 16:46:19.302426] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:30.848 [2024-12-06 16:46:19.304239] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.848 passed 00:19:30.848 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-06 16:46:19.382476] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.848 [2024-12-06 16:46:19.459109] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:30.848 [2024-12-06 16:46:19.483108] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:30.848 [2024-12-06 16:46:19.488172] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.848 passed 00:19:31.108 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-06 16:46:19.563369] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.108 [2024-12-06 16:46:19.564570] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:31.108 [2024-12-06 16:46:19.564587] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:31.108 [2024-12-06 16:46:19.566390] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.108 passed 00:19:31.108 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-06 16:46:19.643132] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.108 [2024-12-06 16:46:19.736106] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:31.108 [2024-12-06 16:46:19.744105] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:31.108 [2024-12-06 16:46:19.752104] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:31.108 [2024-12-06 16:46:19.760105] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:31.108 [2024-12-06 16:46:19.789180] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.368 passed 00:19:31.368 Test: admin_create_io_sq_verify_pc ...[2024-12-06 16:46:19.862381] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.368 [2024-12-06 16:46:19.879116] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:31.368 [2024-12-06 16:46:19.896541] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.368 passed 00:19:31.368 Test: admin_create_io_qp_max_qps ...[2024-12-06 16:46:19.977040] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:32.745 [2024-12-06 16:46:21.086110] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:33.004 [2024-12-06 16:46:21.475749] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.004 passed 00:19:33.004 Test: admin_create_io_sq_shared_cq ...[2024-12-06 16:46:21.553941] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.004 [2024-12-06 16:46:21.686112] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:33.263 [2024-12-06 16:46:21.723158] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.263 passed 00:19:33.263
00:19:33.263 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:33.263              suites      1      1    n/a      0        0
00:19:33.263               tests     18     18     18      0        0
00:19:33.263             asserts    360    360    360      0      n/a
00:19:33.263 00:19:33.263 Elapsed time = 1.511 seconds 00:19:33.263 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2216493 00:19:33.263 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2216493 ']' 00:19:33.263 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2216493 00:19:33.263 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:33.263 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.263 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216493 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216493' 00:19:33.264 killing process with pid 2216493 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2216493 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2216493 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:33.264 00:19:33.264 real 0m5.438s 00:19:33.264 user 0m15.485s 00:19:33.264 sys 0m0.414s 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:33.264 ************************************ 00:19:33.264 END TEST nvmf_vfio_user_nvme_compliance 00:19:33.264 ************************************ 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.264 16:46:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.525 ************************************ 00:19:33.525 START TEST nvmf_vfio_user_fuzz 00:19:33.525 ************************************ 00:19:33.525 16:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:33.525 * Looking for test storage... 
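The compliance pass that just ended is driven entirely over the target's RPC socket; stripped of xtrace noise, the sequence the harness ran is roughly the sketch below. The subsystem name, listener address, malloc sizing, and nvme_compliance arguments are taken verbatim from the trace; using scripts/rpc.py as a stand-in for the script's rpc_cmd helper is an assumption about plumbing, not something the log shows.

  # sketch, run from the SPDK checkout while nvmf_tgt listens on /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0   # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
  # CUnit suite against the vfio-user endpoint:
  test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each of the 18 tests re-enables and disables the controller, which is why every "passed" above is bracketed by enable_ctrlr/disable_ctrlr notices.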
00:19:33.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.525 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:33.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.526 --rc genhtml_branch_coverage=1 00:19:33.526 --rc genhtml_function_coverage=1 00:19:33.526 --rc genhtml_legend=1 00:19:33.526 --rc geninfo_all_blocks=1 00:19:33.526 --rc geninfo_unexecuted_blocks=1 00:19:33.526 00:19:33.526 ' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:33.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.526 --rc genhtml_branch_coverage=1 00:19:33.526 --rc genhtml_function_coverage=1 00:19:33.526 --rc genhtml_legend=1 00:19:33.526 --rc geninfo_all_blocks=1 00:19:33.526 --rc geninfo_unexecuted_blocks=1 00:19:33.526 00:19:33.526 ' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:33.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.526 --rc genhtml_branch_coverage=1 00:19:33.526 --rc genhtml_function_coverage=1 00:19:33.526 --rc genhtml_legend=1 00:19:33.526 --rc geninfo_all_blocks=1 00:19:33.526 --rc geninfo_unexecuted_blocks=1 00:19:33.526 00:19:33.526 ' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:33.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.526 --rc genhtml_branch_coverage=1 00:19:33.526 --rc genhtml_function_coverage=1 00:19:33.526 --rc genhtml_legend=1 00:19:33.526 --rc geninfo_all_blocks=1 00:19:33.526 --rc geninfo_unexecuted_blocks=1 00:19:33.526 00:19:33.526 ' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:33.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2217887 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2217887' 00:19:33.526 Process pid: 2217887 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2217887 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2217887 ']' 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
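The fuzz-target bring-up spans the trace on both sides of this point, so here it is condensed into one sketch for orientation. The binary paths and flags are the ones recorded in this run; treating waitforlisten as "block until the freshly launched target answers on /var/tmp/spdk.sock" is an assumption about the autotest helper's mechanics.

  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # shm id 0, all trace groups, core 0 only
  nvmfpid=$!
  waitforlisten $nvmfpid                       # wait for the RPC socket to come up
  # ...then the same VFIOUSER transport/subsystem/namespace/listener setup as the
  # compliance test above (this time without the 32-namespace cap), followed by:
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a                                    # -t 30: 30 s budget; -S: fixed RNG seed

The fixed seed makes the ~1.3 M fuzzed commands reproducible, and the 30-second budget lines up with the "real 0m31.939s" reported when the test ends.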
00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:33.526 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:33.785 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.785 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:33.785 16:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:34.859 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:34.859 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.859 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:34.860 malloc0 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:34.860 16:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:06.988 Fuzzing completed. Shutting down the fuzz application 00:20:06.988 00:20:06.988 Dumping successful admin opcodes: 00:20:06.988 9, 10, 00:20:06.988 Dumping successful io opcodes: 00:20:06.988 0, 00:20:06.988 NS: 0x20000081ef00 I/O qp, Total commands completed: 1299133, total successful commands: 5093, random_seed: 51478400 00:20:06.988 NS: 0x20000081ef00 admin qp, Total commands completed: 317072, total successful commands: 82, random_seed: 2182502848 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2217887 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2217887 ']' 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2217887 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2217887 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.988 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2217887' 00:20:06.989 killing process with pid 2217887 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2217887 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2217887 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:06.989 00:20:06.989 real 0m31.939s 00:20:06.989 user 0m33.558s 00:20:06.989 sys 0m26.865s 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:06.989 ************************************ 00:20:06.989 END TEST nvmf_vfio_user_fuzz 00:20:06.989 ************************************ 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:06.989 ************************************ 00:20:06.989 START TEST nvmf_auth_target 00:20:06.989 ************************************ 00:20:06.989 16:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:06.989 * Looking for test storage... 00:20:06.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:06.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.989 --rc genhtml_branch_coverage=1 00:20:06.989 --rc genhtml_function_coverage=1 00:20:06.989 --rc genhtml_legend=1 00:20:06.989 --rc geninfo_all_blocks=1 00:20:06.989 --rc geninfo_unexecuted_blocks=1 00:20:06.989 00:20:06.989 ' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:06.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.989 --rc genhtml_branch_coverage=1 00:20:06.989 --rc genhtml_function_coverage=1 00:20:06.989 --rc genhtml_legend=1 00:20:06.989 --rc geninfo_all_blocks=1 00:20:06.989 --rc geninfo_unexecuted_blocks=1 00:20:06.989 00:20:06.989 ' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:06.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.989 --rc genhtml_branch_coverage=1 00:20:06.989 --rc genhtml_function_coverage=1 00:20:06.989 --rc genhtml_legend=1 00:20:06.989 --rc geninfo_all_blocks=1 00:20:06.989 --rc geninfo_unexecuted_blocks=1 00:20:06.989 00:20:06.989 ' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:06.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.989 --rc genhtml_branch_coverage=1 00:20:06.989 --rc genhtml_function_coverage=1 00:20:06.989 --rc genhtml_legend=1 00:20:06.989 --rc geninfo_all_blocks=1 00:20:06.989 --rc geninfo_unexecuted_blocks=1 00:20:06.989 00:20:06.989 ' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.989 16:46:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.989 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:06.990 16:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:11.183 
16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.183 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:11.184 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.184 16:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:11.184 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:11.184 Found net devices under 0000:31:00.0: cvl_0_0 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:11.184 Found net devices under 0000:31:00.1: cvl_0_1 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:11.184 16:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:11.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:20:11.184 00:20:11.184 --- 10.0.0.2 ping statistics --- 00:20:11.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.184 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:20:11.184 00:20:11.184 --- 10.0.0.1 ping statistics --- 00:20:11.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.184 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2228511 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2228511 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2228511 ']' 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.184 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
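
The nvmf_tcp_init sequence above builds a two-interface TCP test bed: the target-side port is moved into a private network namespace while the initiator-side port stays in the default namespace, so traffic between 10.0.0.1 and 10.0.0.2 genuinely crosses the link, and the bidirectional pings confirm it. For reference, the same wiring as a standalone sketch; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this run, and only the error handling is added:

    #!/usr/bin/env bash
    # Sketch of the nvmf_tcp_init wiring traced above.
    set -e
    NS=cvl_0_0_ns_spdk                    # namespace that will own the target port
    ip -4 addr flush cvl_0_0              # start from clean interfaces
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"       # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator IP, default namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port on the initiator side, as the ipts helper does
    # (minus its SPDK_NVMF bookkeeping comment)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # connectivity check in both directions, as in the trace
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
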
00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2228538 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=86527bed2973c711265eaf167739dadafae2536f8c01cacf 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.f5v 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 86527bed2973c711265eaf167739dadafae2536f8c01cacf 0 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 86527bed2973c711265eaf167739dadafae2536f8c01cacf 0 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # 
key=86527bed2973c711265eaf167739dadafae2536f8c01cacf 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.f5v 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.f5v 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.f5v 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2940e76948a4f43968a4a95440a1c65f12bfac2ad45e107a39c9d887fd1c06dc 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.olj 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2940e76948a4f43968a4a95440a1c65f12bfac2ad45e107a39c9d887fd1c06dc 3 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2940e76948a4f43968a4a95440a1c65f12bfac2ad45e107a39c9d887fd1c06dc 3 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2940e76948a4f43968a4a95440a1c65f12bfac2ad45e107a39c9d887fd1c06dc 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.olj 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.olj 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.olj 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8cddb72a1929d821629b1a8fac108376 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.utG 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8cddb72a1929d821629b1a8fac108376 1 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8cddb72a1929d821629b1a8fac108376 1 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8cddb72a1929d821629b1a8fac108376 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.utG 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.utG 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.utG 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e3c5094c62a058a0e8a804f9c736a3cc6ecc83ec4d2a7599 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VaI 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e3c5094c62a058a0e8a804f9c736a3cc6ecc83ec4d2a7599 2 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@747 -- # format_key DHHC-1 e3c5094c62a058a0e8a804f9c736a3cc6ecc83ec4d2a7599 2 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e3c5094c62a058a0e8a804f9c736a3cc6ecc83ec4d2a7599 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VaI 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VaI 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.VaI 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=afcdd206ef0eb2688132caac1044bce60ee83b8f807b8bdb 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:11.185 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.RRD 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key afcdd206ef0eb2688132caac1044bce60ee83b8f807b8bdb 2 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 afcdd206ef0eb2688132caac1044bce60ee83b8f807b8bdb 2 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=afcdd206ef0eb2688132caac1044bce60ee83b8f807b8bdb 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.RRD 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.RRD 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.RRD 
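
Each gen_dhchap_key call above follows the same recipe: draw len/2 random bytes with xxd, keep the resulting hex string as the ASCII secret, and wrap it in the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hash id>:<base64 of secret plus its CRC-32>:, where the hash id is 00/01/02/03 for null/sha256/sha384/sha512. The python one-liner the trace pipes the key through is not shown, so the following is a reconstruction assuming the CRC-32 of the ASCII secret is appended little-endian, as in the spec's secret format:

    gen_dhchap_key() {  # usage: gen_dhchap_key <digest> <len>, e.g. gen_dhchap_key sha256 32
      local digest=$1 len=$2 key file
      local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # "len" hex characters
      file=$(mktemp -t "spdk.key-$digest.XXX")
      KEY=$key DIGEST=${ids[$digest]} python3 - > "$file" <<'EOF'
    import base64, os, zlib
    key = os.environ["KEY"].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")  # CRC-32 of the ASCII secret
    print("DHHC-1:%02x:%s:" % (int(os.environ["DIGEST"]),
                               base64.b64encode(key + crc).decode()))
    EOF
      chmod 0600 "$file"
      echo "$file"
    }

This is consistent with the values in the log: the base64 payload of the DHHC-1:01: secret used in the nvme connect calls later in the trace decodes back to the 8cddb72a1929d821629b1a8fac108376 string generated here for key1, followed by four CRC bytes.
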
00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ec88e129948a0072f7f4f43cdbfe6dcf 00:20:11.186 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TII 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ec88e129948a0072f7f4f43cdbfe6dcf 1 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ec88e129948a0072f7f4f43cdbfe6dcf 1 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ec88e129948a0072f7f4f43cdbfe6dcf 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TII 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TII 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.TII 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=715d271b61f5975825224b0c41eff733d6e7140083cfc00784088c5691422ad2 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-sha512.XXX 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Vy8 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 715d271b61f5975825224b0c41eff733d6e7140083cfc00784088c5691422ad2 3 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 715d271b61f5975825224b0c41eff733d6e7140083cfc00784088c5691422ad2 3 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=715d271b61f5975825224b0c41eff733d6e7140083cfc00784088c5691422ad2 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Vy8 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Vy8 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Vy8 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2228511 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2228511 ']' 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.446 16:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2228538 /var/tmp/host.sock 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2228538 ']' 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
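
At this point two SPDK processes are coming up: the target (nvmf_tgt) runs inside the cvl_0_0_ns_spdk namespace and answers RPCs on the default /var/tmp/spdk.sock, while the host-side spdk_tgt gets its own socket at /var/tmp/host.sock, which is what the hostrpc wrapper's -s flag selects. Condensed from the trace (paths are from this run; the backgrounding is implied, and waitforlisten is the autotest helper that polls the RPC socket until the process answers):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # target application, inside the namespace that owns cvl_0_0
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
    # host application, on a separate RPC socket so the two sides don't collide
    "$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &

    # register each generated key file on both sides (key0/ckey0 shown here;
    # the RPCs that follow in the trace repeat this for key1..key3)
    "$SPDK/scripts/rpc.py" keyring_file_add_key key0 /tmp/spdk.key-null.f5v
    "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.f5v
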
00:20:11.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.446 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.f5v 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.f5v 00:20:11.707 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.f5v 00:20:11.966 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.olj ]] 00:20:11.966 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.olj 00:20:11.966 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.966 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.966 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.966 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.olj 00:20:11.966 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.olj 00:20:11.967 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:11.967 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.utG 00:20:11.967 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.967 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.967 16:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.967 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.utG 00:20:11.967 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.utG 00:20:12.226 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.VaI ]] 00:20:12.226 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VaI 00:20:12.226 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.226 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.226 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.226 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VaI 00:20:12.226 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VaI 00:20:12.486 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:12.486 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RRD 00:20:12.486 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.486 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.486 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.486 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.RRD 00:20:12.486 16:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.RRD 00:20:12.486 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.TII ]] 00:20:12.486 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TII 00:20:12.486 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.486 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.486 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.486 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TII 00:20:12.486 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TII 00:20:12.746 16:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:12.746 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy8 00:20:12.746 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.746 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.746 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.746 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy8 00:20:12.746 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy8 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.006 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.266 00:20:13.266 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.266 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.266 16:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.526 { 00:20:13.526 "cntlid": 1, 00:20:13.526 "qid": 0, 00:20:13.526 "state": "enabled", 00:20:13.526 "thread": "nvmf_tgt_poll_group_000", 00:20:13.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:13.526 "listen_address": { 00:20:13.526 "trtype": "TCP", 00:20:13.526 "adrfam": "IPv4", 00:20:13.526 "traddr": "10.0.0.2", 00:20:13.526 "trsvcid": "4420" 00:20:13.526 }, 00:20:13.526 "peer_address": { 00:20:13.526 "trtype": "TCP", 00:20:13.526 "adrfam": "IPv4", 00:20:13.526 "traddr": "10.0.0.1", 00:20:13.526 "trsvcid": "43866" 00:20:13.526 }, 00:20:13.526 "auth": { 00:20:13.526 "state": "completed", 00:20:13.526 "digest": "sha256", 00:20:13.526 "dhgroup": "null" 00:20:13.526 } 00:20:13.526 } 00:20:13.526 ]' 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.526 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
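
Every iteration of the digest/dhgroup matrix verifies itself the way the one just completed did: after bdev_nvme_attach_controller succeeds, the test pulls the qpair list for the subsystem and asserts with jq that the negotiated digest, dhgroup, and auth state match what was configured, then detaches and reconnects once more through the kernel initiator (nvme connect) using the raw DHHC-1 secrets. A condensed sketch of that verification step, using the rpc_cmd/hostrpc helper names from the trace:

    # after attaching nvme0 with --dhchap-key key0 --dhchap-ctrlr-key ckey0 ...
    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                                  # controller came up

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    hostrpc bdev_nvme_detach_controller nvme0             # tear down for the next case
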
00:20:13.786 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:13.787 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.356 16:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.614 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.614 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.873 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.873 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.873 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.873 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.873 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.873 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.874 { 00:20:14.874 "cntlid": 3, 00:20:14.874 "qid": 0, 00:20:14.874 "state": "enabled", 00:20:14.874 "thread": "nvmf_tgt_poll_group_000", 00:20:14.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:14.874 "listen_address": { 00:20:14.874 "trtype": "TCP", 00:20:14.874 "adrfam": "IPv4", 00:20:14.874 "traddr": "10.0.0.2", 00:20:14.874 "trsvcid": "4420" 00:20:14.874 }, 00:20:14.874 "peer_address": { 00:20:14.874 "trtype": "TCP", 00:20:14.874 "adrfam": "IPv4", 00:20:14.874 "traddr": "10.0.0.1", 00:20:14.874 "trsvcid": "43890" 00:20:14.874 }, 00:20:14.874 "auth": { 00:20:14.874 "state": "completed", 00:20:14.874 "digest": "sha256", 00:20:14.874 "dhgroup": "null" 00:20:14.874 } 00:20:14.874 } 00:20:14.874 ]' 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:14.874 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.133 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:15.133 16:47:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:15.760 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.020 16:47:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.020 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.278 00:20:16.278 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.278 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.279 { 00:20:16.279 "cntlid": 5, 00:20:16.279 "qid": 0, 00:20:16.279 "state": "enabled", 00:20:16.279 "thread": "nvmf_tgt_poll_group_000", 00:20:16.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:16.279 "listen_address": { 00:20:16.279 "trtype": "TCP", 00:20:16.279 "adrfam": "IPv4", 00:20:16.279 "traddr": "10.0.0.2", 00:20:16.279 "trsvcid": "4420" 00:20:16.279 }, 00:20:16.279 "peer_address": { 00:20:16.279 "trtype": "TCP", 00:20:16.279 "adrfam": "IPv4", 00:20:16.279 "traddr": "10.0.0.1", 00:20:16.279 "trsvcid": "43922" 00:20:16.279 }, 00:20:16.279 "auth": { 00:20:16.279 "state": "completed", 00:20:16.279 "digest": "sha256", 00:20:16.279 "dhgroup": "null" 00:20:16.279 } 00:20:16.279 } 00:20:16.279 ]' 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.279 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.537 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.537 16:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.537 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.537 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.537 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.537 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:16.537 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.104 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.363 
16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.363 16:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:17.620 00:20:17.620 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.620 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.620 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.878 { 00:20:17.878 "cntlid": 7, 00:20:17.878 "qid": 0, 00:20:17.878 "state": "enabled", 00:20:17.878 "thread": "nvmf_tgt_poll_group_000", 00:20:17.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:17.878 "listen_address": { 00:20:17.878 "trtype": "TCP", 00:20:17.878 "adrfam": "IPv4", 00:20:17.878 "traddr": "10.0.0.2", 00:20:17.878 "trsvcid": "4420" 00:20:17.878 }, 00:20:17.878 "peer_address": { 00:20:17.878 "trtype": "TCP", 00:20:17.878 "adrfam": "IPv4", 00:20:17.878 "traddr": "10.0.0.1", 00:20:17.878 "trsvcid": "46580" 00:20:17.878 }, 00:20:17.878 "auth": { 00:20:17.878 "state": "completed", 00:20:17.878 "digest": "sha256", 00:20:17.878 "dhgroup": "null" 00:20:17.878 } 00:20:17.878 } 00:20:17.878 ]' 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.878 16:47:06 
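For readers following the jq checks in the trace: after each attach, the harness pulls the live qpair list from the target and asserts that the negotiated digest, DH group, and auth state match the pass under test. A condensed sketch of that check for the sha256/null/key3 pass just traced (rpc_cmd is the harness helper around the target-side rpc.py; its socket does not appear in the trace because rpc_cmd runs with xtrace disabled):

    # Assert the negotiated auth parameters on the target side.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "null" ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == "completed" ]]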
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.878 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.136 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:18.136 16:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:18.395 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.655 16:47:07 
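Each pass starts by pinning the SPDK host stack to exactly one digest/DH-group combination, so a successful connect proves that specific negotiation rather than a fallback. The hostrpc helper seen throughout is just rpc.py pointed at the host socket; spelled out with the paths from this run:

    # Restrict the host to SHA-256 with ffdhe2048 before the next connect.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048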
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.655 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.656 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.916 00:20:18.916 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.916 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.916 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.174 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.174 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.175 { 00:20:19.175 "cntlid": 9, 00:20:19.175 "qid": 0, 00:20:19.175 "state": "enabled", 00:20:19.175 "thread": "nvmf_tgt_poll_group_000", 00:20:19.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:19.175 "listen_address": { 00:20:19.175 "trtype": "TCP", 00:20:19.175 "adrfam": "IPv4", 00:20:19.175 "traddr": "10.0.0.2", 00:20:19.175 "trsvcid": "4420" 00:20:19.175 }, 00:20:19.175 "peer_address": { 00:20:19.175 "trtype": "TCP", 00:20:19.175 "adrfam": "IPv4", 00:20:19.175 "traddr": "10.0.0.1", 00:20:19.175 "trsvcid": "46612" 00:20:19.175 }, 00:20:19.175 "auth": { 00:20:19.175 "state": "completed", 00:20:19.175 "digest": "sha256", 00:20:19.175 "dhgroup": "ffdhe2048" 00:20:19.175 } 00:20:19.175 } 00:20:19.175 ]' 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
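The key0 pass above is the full bidirectional case: the target binds the host NQN to key0 plus controller key ckey0, and the host attaches with the same pair. A condensed sketch (the SUBNQN/HOSTNQN variables are shorthand introduced here, and key0/ckey0 must already be registered with the target and host, which happened earlier in the run, outside this excerpt):

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

    # Target side: admit the host and require DH-HMAC-CHAP with key0/ckey0.
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach a controller, authenticating in both directions.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0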
-- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.175 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.434 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:19.434 16:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:20.003 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.003 
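Besides the SPDK host stack, every pass also exercises the kernel initiator: nvme-cli connects with the raw DHHC-1 secrets rather than keyring names, and is then torn down with a disconnect. A sketch of the leg just traced, with the secrets elided (the full values appear verbatim in the trace):

    # Kernel-initiator leg; the <...> placeholders stand for the DHHC-1
    # secrets printed in the trace, not literal values.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
        --dhchap-secret 'DHHC-1:00:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>'
    nvme disconnect -n "$SUBNQN"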
16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.004 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.004 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.004 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.004 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.004 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.004 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.264 00:20:20.264 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.264 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.264 16:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.524 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.524 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.524 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.524 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.524 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.524 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.524 { 00:20:20.524 "cntlid": 11, 00:20:20.524 "qid": 0, 00:20:20.524 "state": "enabled", 00:20:20.524 "thread": "nvmf_tgt_poll_group_000", 00:20:20.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:20.524 "listen_address": { 00:20:20.524 "trtype": "TCP", 00:20:20.524 "adrfam": "IPv4", 00:20:20.524 "traddr": "10.0.0.2", 00:20:20.524 "trsvcid": "4420" 00:20:20.524 }, 00:20:20.524 "peer_address": { 00:20:20.524 "trtype": "TCP", 00:20:20.524 "adrfam": "IPv4", 00:20:20.525 "traddr": "10.0.0.1", 00:20:20.525 "trsvcid": "46638" 00:20:20.525 }, 00:20:20.525 "auth": { 00:20:20.525 "state": "completed", 00:20:20.525 "digest": "sha256", 00:20:20.525 "dhgroup": "ffdhe2048" 00:20:20.525 } 00:20:20.525 } 00:20:20.525 ]' 00:20:20.525 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.525 16:47:09 
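Before inspecting qpairs, the harness first confirms that the attach actually produced a controller on the host side; this name check is the gate for everything that follows:

    # The attach counts as successful only if the host now reports nvme0.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]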
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.525 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.525 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.525 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.525 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.525 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.525 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.784 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:20.784 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.355 16:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.355 16:47:10 
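A note on notation: comparisons such as [[ sha256 == \s\h\a\2\5\6 ]] are not corruption in the log. Bash xtrace re-quotes the pattern side of a [[ == ]] test by backslash-escaping every character, which signals that the right-hand side matches literally rather than as a glob. The underlying script line is simply (qpairs as in the sketch further up):

    # What target/auth.sh@75 actually executes; xtrace escapes the RHS.
    [[ $(jq -r '.[0].auth.digest' <<<"$qpairs") == "sha256" ]]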
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.355 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.615 00:20:21.615 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.615 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.615 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.876 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.876 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.876 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.876 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.876 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.876 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.876 { 00:20:21.876 "cntlid": 13, 00:20:21.876 "qid": 0, 00:20:21.876 "state": "enabled", 00:20:21.876 "thread": "nvmf_tgt_poll_group_000", 00:20:21.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:21.876 "listen_address": { 00:20:21.876 "trtype": "TCP", 00:20:21.876 "adrfam": "IPv4", 00:20:21.876 "traddr": "10.0.0.2", 00:20:21.876 "trsvcid": "4420" 00:20:21.876 }, 00:20:21.876 "peer_address": { 00:20:21.876 "trtype": "TCP", 00:20:21.876 "adrfam": "IPv4", 00:20:21.876 "traddr": "10.0.0.1", 00:20:21.876 "trsvcid": "46674" 00:20:21.876 }, 00:20:21.876 "auth": { 00:20:21.876 "state": "completed", 00:20:21.877 "digest": 
"sha256", 00:20:21.877 "dhgroup": "ffdhe2048" 00:20:21.877 } 00:20:21.877 } 00:20:21.877 ]' 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.877 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.137 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:22.137 16:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.707 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.969 16:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.969 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.229 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.229 { 00:20:23.229 "cntlid": 15, 00:20:23.229 "qid": 0, 00:20:23.229 "state": "enabled", 00:20:23.229 "thread": "nvmf_tgt_poll_group_000", 00:20:23.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:23.229 "listen_address": { 00:20:23.229 "trtype": "TCP", 00:20:23.229 "adrfam": "IPv4", 00:20:23.229 "traddr": "10.0.0.2", 00:20:23.229 "trsvcid": "4420" 00:20:23.229 }, 00:20:23.229 "peer_address": { 00:20:23.229 "trtype": "TCP", 00:20:23.229 "adrfam": "IPv4", 00:20:23.229 "traddr": "10.0.0.1", 00:20:23.229 
"trsvcid": "46698" 00:20:23.229 }, 00:20:23.229 "auth": { 00:20:23.229 "state": "completed", 00:20:23.229 "digest": "sha256", 00:20:23.229 "dhgroup": "ffdhe2048" 00:20:23.229 } 00:20:23.229 } 00:20:23.229 ]' 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.229 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.489 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.489 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.489 16:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.489 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:23.489 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.059 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:24.320 16:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.320 16:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.579 00:20:24.579 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.579 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.579 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.579 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.579 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.579 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.579 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.838 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.838 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.838 { 00:20:24.838 "cntlid": 17, 00:20:24.838 "qid": 0, 00:20:24.838 "state": "enabled", 00:20:24.838 "thread": "nvmf_tgt_poll_group_000", 00:20:24.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:24.838 "listen_address": { 00:20:24.838 "trtype": "TCP", 00:20:24.838 "adrfam": "IPv4", 
00:20:24.838 "traddr": "10.0.0.2", 00:20:24.838 "trsvcid": "4420" 00:20:24.838 }, 00:20:24.838 "peer_address": { 00:20:24.838 "trtype": "TCP", 00:20:24.838 "adrfam": "IPv4", 00:20:24.838 "traddr": "10.0.0.1", 00:20:24.838 "trsvcid": "46720" 00:20:24.838 }, 00:20:24.838 "auth": { 00:20:24.838 "state": "completed", 00:20:24.838 "digest": "sha256", 00:20:24.838 "dhgroup": "ffdhe3072" 00:20:24.838 } 00:20:24.838 } 00:20:24.838 ]' 00:20:24.838 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.838 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.838 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.838 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.838 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.839 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.839 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.839 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.839 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:24.839 16:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.777 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.054 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.054 { 
00:20:26.054 "cntlid": 19, 00:20:26.054 "qid": 0, 00:20:26.054 "state": "enabled", 00:20:26.054 "thread": "nvmf_tgt_poll_group_000", 00:20:26.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:26.054 "listen_address": { 00:20:26.054 "trtype": "TCP", 00:20:26.054 "adrfam": "IPv4", 00:20:26.054 "traddr": "10.0.0.2", 00:20:26.054 "trsvcid": "4420" 00:20:26.054 }, 00:20:26.054 "peer_address": { 00:20:26.054 "trtype": "TCP", 00:20:26.054 "adrfam": "IPv4", 00:20:26.054 "traddr": "10.0.0.1", 00:20:26.054 "trsvcid": "46744" 00:20:26.054 }, 00:20:26.054 "auth": { 00:20:26.054 "state": "completed", 00:20:26.054 "digest": "sha256", 00:20:26.054 "dhgroup": "ffdhe3072" 00:20:26.054 } 00:20:26.054 } 00:20:26.054 ]' 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.054 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.314 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.314 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.314 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.314 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.314 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.314 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:26.314 16:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:26.884 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.142 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.143 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.401 00:20:27.401 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.401 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.401 16:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.660 16:47:16 
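About the secret format: the DHHC-1:<t>: prefix records the transform applied to the key material (0 for none, 1 through 3 for SHA-256/384/512), which is why the four keys in this run carry prefixes 00 through 03 with progressively longer payloads. Secrets of this shape can be generated with nvme-cli; the flag names below are from recent nvme-cli and should be treated as an assumption for other versions:

    # Generate a 32-byte DH-HMAC-CHAP secret with no transform (DHHC-1:00:...).
    nvme gen-dhchap-key --key-length 32 --hmac 0 --nqn "$HOSTNQN"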
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.660 { 00:20:27.660 "cntlid": 21, 00:20:27.660 "qid": 0, 00:20:27.660 "state": "enabled", 00:20:27.660 "thread": "nvmf_tgt_poll_group_000", 00:20:27.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:27.660 "listen_address": { 00:20:27.660 "trtype": "TCP", 00:20:27.660 "adrfam": "IPv4", 00:20:27.660 "traddr": "10.0.0.2", 00:20:27.660 "trsvcid": "4420" 00:20:27.660 }, 00:20:27.660 "peer_address": { 00:20:27.660 "trtype": "TCP", 00:20:27.660 "adrfam": "IPv4", 00:20:27.660 "traddr": "10.0.0.1", 00:20:27.660 "trsvcid": "41102" 00:20:27.660 }, 00:20:27.660 "auth": { 00:20:27.660 "state": "completed", 00:20:27.660 "digest": "sha256", 00:20:27.660 "dhgroup": "ffdhe3072" 00:20:27.660 } 00:20:27.660 } 00:20:27.660 ]' 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.660 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.919 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:27.919 16:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:28.486 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.487 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:28.487 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.487 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.487 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:20:28.487 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.487 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.487 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.747 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.006 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.006 16:47:17 
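Key 3 is the unidirectional case: no controller key was registered for it, so the ${ckeys[...]:+...} expansion at target/auth.sh@68 produces an empty array and both add_host and the attach run with --dhchap-key only, as the commands above show. The idiom, with the function's positional $3 renamed to keyid for readability:

    # ${var:+word} expands to word only when var is set and non-empty, so
    # ckey stays an empty array for key IDs that have no controller key.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key "key$keyid" "${ckey[@]}"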
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.006 { 00:20:29.006 "cntlid": 23, 00:20:29.006 "qid": 0, 00:20:29.006 "state": "enabled", 00:20:29.006 "thread": "nvmf_tgt_poll_group_000", 00:20:29.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:29.006 "listen_address": { 00:20:29.006 "trtype": "TCP", 00:20:29.006 "adrfam": "IPv4", 00:20:29.006 "traddr": "10.0.0.2", 00:20:29.006 "trsvcid": "4420" 00:20:29.006 }, 00:20:29.006 "peer_address": { 00:20:29.006 "trtype": "TCP", 00:20:29.006 "adrfam": "IPv4", 00:20:29.006 "traddr": "10.0.0.1", 00:20:29.006 "trsvcid": "41122" 00:20:29.006 }, 00:20:29.006 "auth": { 00:20:29.006 "state": "completed", 00:20:29.006 "digest": "sha256", 00:20:29.006 "dhgroup": "ffdhe3072" 00:20:29.006 } 00:20:29.006 } 00:20:29.006 ]' 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.006 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.007 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.265 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.265 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.265 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.265 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:29.265 16:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.835 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.095 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.353 00:20:30.353 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.353 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.353 16:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.612 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.612 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.612 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.612 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.612 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.612 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.612 { 00:20:30.612 "cntlid": 25, 00:20:30.612 "qid": 0, 00:20:30.612 "state": "enabled", 00:20:30.612 "thread": "nvmf_tgt_poll_group_000", 00:20:30.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:30.612 "listen_address": { 00:20:30.612 "trtype": "TCP", 00:20:30.612 "adrfam": "IPv4", 00:20:30.612 "traddr": "10.0.0.2", 00:20:30.612 "trsvcid": "4420" 00:20:30.612 }, 00:20:30.612 "peer_address": { 00:20:30.612 "trtype": "TCP", 00:20:30.613 "adrfam": "IPv4", 00:20:30.613 "traddr": "10.0.0.1", 00:20:30.613 "trsvcid": "41160" 00:20:30.613 }, 00:20:30.613 "auth": { 00:20:30.613 "state": "completed", 00:20:30.613 "digest": "sha256", 00:20:30.613 "dhgroup": "ffdhe4096" 00:20:30.613 } 00:20:30.613 } 00:20:30.613 ]' 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.613 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.870 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:30.870 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.435 16:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.435 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.694 00:20:31.694 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.694 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.694 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.952 { 00:20:31.952 "cntlid": 27, 00:20:31.952 "qid": 0, 00:20:31.952 "state": "enabled", 00:20:31.952 "thread": "nvmf_tgt_poll_group_000", 00:20:31.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:31.952 "listen_address": { 00:20:31.952 "trtype": "TCP", 00:20:31.952 "adrfam": "IPv4", 00:20:31.952 "traddr": "10.0.0.2", 00:20:31.952 "trsvcid": "4420" 00:20:31.952 }, 00:20:31.952 "peer_address": { 00:20:31.952 "trtype": "TCP", 00:20:31.952 "adrfam": "IPv4", 00:20:31.952 "traddr": "10.0.0.1", 00:20:31.952 "trsvcid": "41186" 00:20:31.952 }, 00:20:31.952 "auth": { 00:20:31.952 "state": "completed", 00:20:31.952 "digest": "sha256", 00:20:31.952 "dhgroup": "ffdhe4096" 00:20:31.952 } 00:20:31.952 } 00:20:31.952 ]' 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.952 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.211 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:32.211 16:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:32.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.780 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.039 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.297 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
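[For orientation: every iteration recorded in this log repeats the same connect_authenticate cycle, varying only the digest, DH group, and key index. The following is a condensed sketch assembled strictly from the commands visible in this log, not the verbatim target/auth.sh source; the variables hostnqn, hostid, key, and ckey are illustrative stand-ins for the literal NQN/UUID/DHHC-1 values shown above and below.]

# Condensed sketch of one connect_authenticate cycle (illustrative; values
# taken from the surrounding log, variable names are stand-ins).
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

# 1. Restrict the host-side initiator to one digest / DH-group combination.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# 2. Allow the host on the subsystem with the DH-HMAC-CHAP key under test.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller, forcing an authenticated TCP connection.
$rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. Verify the negotiated parameters on the target side.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# 5. Tear down, repeat with the kernel initiator (nvme-cli), then clean up.
$rpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
$rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

[Note the asymmetry preserved from the log: the SPDK RPCs take named keyring entries (--dhchap-key/--dhchap-ctrlr-key), while nvme-cli takes the DHHC-1 secrets directly (--dhchap-secret/--dhchap-ctrl-secret).]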
00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.298 { 00:20:33.298 "cntlid": 29, 00:20:33.298 "qid": 0, 00:20:33.298 "state": "enabled", 00:20:33.298 "thread": "nvmf_tgt_poll_group_000", 00:20:33.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:33.298 "listen_address": { 00:20:33.298 "trtype": "TCP", 00:20:33.298 "adrfam": "IPv4", 00:20:33.298 "traddr": "10.0.0.2", 00:20:33.298 "trsvcid": "4420" 00:20:33.298 }, 00:20:33.298 "peer_address": { 00:20:33.298 "trtype": "TCP", 00:20:33.298 "adrfam": "IPv4", 00:20:33.298 "traddr": "10.0.0.1", 00:20:33.298 "trsvcid": "41224" 00:20:33.298 }, 00:20:33.298 "auth": { 00:20:33.298 "state": "completed", 00:20:33.298 "digest": "sha256", 00:20:33.298 "dhgroup": "ffdhe4096" 00:20:33.298 } 00:20:33.298 } 00:20:33.298 ]' 00:20:33.298 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.557 16:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:33.557 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: 
--dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:34.148 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.408 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:34.408 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.408 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.408 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.408 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:34.408 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.408 16:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.408 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:34.408 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:34.408 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.409 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:34.667 00:20:34.667 16:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.668 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.668 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.927 { 00:20:34.927 "cntlid": 31, 00:20:34.927 "qid": 0, 00:20:34.927 "state": "enabled", 00:20:34.927 "thread": "nvmf_tgt_poll_group_000", 00:20:34.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:34.927 "listen_address": { 00:20:34.927 "trtype": "TCP", 00:20:34.927 "adrfam": "IPv4", 00:20:34.927 "traddr": "10.0.0.2", 00:20:34.927 "trsvcid": "4420" 00:20:34.927 }, 00:20:34.927 "peer_address": { 00:20:34.927 "trtype": "TCP", 00:20:34.927 "adrfam": "IPv4", 00:20:34.927 "traddr": "10.0.0.1", 00:20:34.927 "trsvcid": "41264" 00:20:34.927 }, 00:20:34.927 "auth": { 00:20:34.927 "state": "completed", 00:20:34.927 "digest": "sha256", 00:20:34.927 "dhgroup": "ffdhe4096" 00:20:34.927 } 00:20:34.927 } 00:20:34.927 ]' 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.927 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.186 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:35.186 16:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret 
DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:35.754 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.013 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.271 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.271 { 00:20:36.271 "cntlid": 33, 00:20:36.271 "qid": 0, 00:20:36.271 "state": "enabled", 00:20:36.271 "thread": "nvmf_tgt_poll_group_000", 00:20:36.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:36.271 "listen_address": { 00:20:36.271 "trtype": "TCP", 00:20:36.271 "adrfam": "IPv4", 00:20:36.271 "traddr": "10.0.0.2", 00:20:36.271 "trsvcid": "4420" 00:20:36.271 }, 00:20:36.271 "peer_address": { 00:20:36.271 "trtype": "TCP", 00:20:36.271 "adrfam": "IPv4", 00:20:36.271 "traddr": "10.0.0.1", 00:20:36.271 "trsvcid": "41304" 00:20:36.271 }, 00:20:36.271 "auth": { 00:20:36.271 "state": "completed", 00:20:36.271 "digest": "sha256", 00:20:36.271 "dhgroup": "ffdhe6144" 00:20:36.271 } 00:20:36.271 } 00:20:36.271 ]' 00:20:36.271 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.530 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.530 16:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.530 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:36.530 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.530 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.530 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.530 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.530 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret 
DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:36.530 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:37.096 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.355 16:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.613 00:20:37.613 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.613 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.613 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.872 { 00:20:37.872 "cntlid": 35, 00:20:37.872 "qid": 0, 00:20:37.872 "state": "enabled", 00:20:37.872 "thread": "nvmf_tgt_poll_group_000", 00:20:37.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:37.872 "listen_address": { 00:20:37.872 "trtype": "TCP", 00:20:37.872 "adrfam": "IPv4", 00:20:37.872 "traddr": "10.0.0.2", 00:20:37.872 "trsvcid": "4420" 00:20:37.872 }, 00:20:37.872 "peer_address": { 00:20:37.872 "trtype": "TCP", 00:20:37.872 "adrfam": "IPv4", 00:20:37.872 "traddr": "10.0.0.1", 00:20:37.872 "trsvcid": "41964" 00:20:37.872 }, 00:20:37.872 "auth": { 00:20:37.872 "state": "completed", 00:20:37.872 "digest": "sha256", 00:20:37.872 "dhgroup": "ffdhe6144" 00:20:37.872 } 00:20:37.872 } 00:20:37.872 ]' 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.872 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.131 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:38.132 16:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:38.699 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.699 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:38.699 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.699 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.699 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.699 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.700 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.700 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.961 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.221 00:20:39.221 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.221 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.221 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.481 { 00:20:39.481 "cntlid": 37, 00:20:39.481 "qid": 0, 00:20:39.481 "state": "enabled", 00:20:39.481 "thread": "nvmf_tgt_poll_group_000", 00:20:39.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:39.481 "listen_address": { 00:20:39.481 "trtype": "TCP", 00:20:39.481 "adrfam": "IPv4", 00:20:39.481 "traddr": "10.0.0.2", 00:20:39.481 "trsvcid": "4420" 00:20:39.481 }, 00:20:39.481 "peer_address": { 00:20:39.481 "trtype": "TCP", 00:20:39.481 "adrfam": "IPv4", 00:20:39.481 "traddr": "10.0.0.1", 00:20:39.481 "trsvcid": "41994" 00:20:39.481 }, 00:20:39.481 "auth": { 00:20:39.481 "state": "completed", 00:20:39.481 "digest": "sha256", 00:20:39.481 "dhgroup": "ffdhe6144" 00:20:39.481 } 00:20:39.481 } 00:20:39.481 ]' 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.481 16:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.481 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.481 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.481 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.481 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:20:39.481 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.741 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:39.741 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:40.311 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.312 16:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.312 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.572 16:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.572 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.572 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.572 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.832 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.832 { 00:20:40.832 "cntlid": 39, 00:20:40.832 "qid": 0, 00:20:40.832 "state": "enabled", 00:20:40.832 "thread": "nvmf_tgt_poll_group_000", 00:20:40.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:40.832 "listen_address": { 00:20:40.832 "trtype": "TCP", 00:20:40.832 "adrfam": "IPv4", 00:20:40.832 "traddr": "10.0.0.2", 00:20:40.832 "trsvcid": "4420" 00:20:40.832 }, 00:20:40.832 "peer_address": { 00:20:40.832 "trtype": "TCP", 00:20:40.832 "adrfam": "IPv4", 00:20:40.832 "traddr": "10.0.0.1", 00:20:40.832 "trsvcid": "42032" 00:20:40.832 }, 00:20:40.832 "auth": { 00:20:40.832 "state": "completed", 00:20:40.832 "digest": "sha256", 00:20:40.832 "dhgroup": "ffdhe6144" 00:20:40.832 } 00:20:40.832 } 00:20:40.832 ]' 00:20:40.832 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:41.092 16:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:41.745 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.079 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.340 00:20:42.340 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.340 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.340 16:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.599 { 00:20:42.599 "cntlid": 41, 00:20:42.599 "qid": 0, 00:20:42.599 "state": "enabled", 00:20:42.599 "thread": "nvmf_tgt_poll_group_000", 00:20:42.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:42.599 "listen_address": { 00:20:42.599 "trtype": "TCP", 00:20:42.599 "adrfam": "IPv4", 00:20:42.599 "traddr": "10.0.0.2", 00:20:42.599 "trsvcid": "4420" 00:20:42.599 }, 00:20:42.599 "peer_address": { 00:20:42.599 "trtype": "TCP", 00:20:42.599 "adrfam": "IPv4", 00:20:42.599 "traddr": "10.0.0.1", 00:20:42.599 "trsvcid": "42048" 00:20:42.599 }, 00:20:42.599 "auth": { 00:20:42.599 "state": "completed", 00:20:42.599 "digest": "sha256", 00:20:42.599 "dhgroup": "ffdhe8192" 00:20:42.599 } 00:20:42.599 } 00:20:42.599 ]' 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.599 16:47:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.599 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.600 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.858 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:42.858 16:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.426 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.685 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.686 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.257 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.257 { 00:20:44.257 "cntlid": 43, 00:20:44.257 "qid": 0, 00:20:44.257 "state": "enabled", 00:20:44.257 "thread": "nvmf_tgt_poll_group_000", 00:20:44.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:44.257 "listen_address": { 00:20:44.257 "trtype": "TCP", 00:20:44.257 "adrfam": "IPv4", 00:20:44.257 "traddr": "10.0.0.2", 00:20:44.257 "trsvcid": "4420" 00:20:44.257 }, 00:20:44.257 "peer_address": { 00:20:44.257 "trtype": "TCP", 00:20:44.257 "adrfam": "IPv4", 00:20:44.257 "traddr": "10.0.0.1", 00:20:44.257 "trsvcid": "42082" 00:20:44.257 }, 00:20:44.257 "auth": { 00:20:44.257 "state": "completed", 00:20:44.257 "digest": "sha256", 00:20:44.257 "dhgroup": "ffdhe8192" 00:20:44.257 } 00:20:44.257 } 00:20:44.257 ]' 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.257 16:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.517 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:44.517 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.088 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:45.348 16:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.348 16:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.607 00:20:45.607 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.607 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.607 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.867 { 00:20:45.867 "cntlid": 45, 00:20:45.867 "qid": 0, 00:20:45.867 "state": "enabled", 00:20:45.867 "thread": "nvmf_tgt_poll_group_000", 00:20:45.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:45.867 "listen_address": { 00:20:45.867 "trtype": "TCP", 00:20:45.867 "adrfam": "IPv4", 00:20:45.867 "traddr": "10.0.0.2", 00:20:45.867 "trsvcid": "4420" 00:20:45.867 }, 00:20:45.867 "peer_address": { 00:20:45.867 "trtype": "TCP", 00:20:45.867 "adrfam": "IPv4", 00:20:45.867 "traddr": "10.0.0.1", 00:20:45.867 "trsvcid": "42116" 00:20:45.867 }, 00:20:45.867 "auth": { 00:20:45.867 "state": "completed", 00:20:45.867 "digest": "sha256", 00:20:45.867 "dhgroup": "ffdhe8192" 00:20:45.867 } 00:20:45.867 } 00:20:45.867 ]' 00:20:45.867 
16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.867 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.126 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:46.126 16:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.692 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:46.951 16:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:46.951 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:47.519 00:20:47.519 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.519 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.519 16:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.519 { 00:20:47.519 "cntlid": 47, 00:20:47.519 "qid": 0, 00:20:47.519 "state": "enabled", 00:20:47.519 "thread": "nvmf_tgt_poll_group_000", 00:20:47.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:47.519 "listen_address": { 00:20:47.519 "trtype": "TCP", 00:20:47.519 "adrfam": "IPv4", 00:20:47.519 "traddr": "10.0.0.2", 00:20:47.519 "trsvcid": "4420" 00:20:47.519 }, 00:20:47.519 "peer_address": { 00:20:47.519 "trtype": "TCP", 00:20:47.519 "adrfam": "IPv4", 00:20:47.519 "traddr": "10.0.0.1", 00:20:47.519 "trsvcid": "42132" 00:20:47.519 }, 00:20:47.519 "auth": { 00:20:47.519 "state": "completed", 00:20:47.519 
"digest": "sha256", 00:20:47.519 "dhgroup": "ffdhe8192" 00:20:47.519 } 00:20:47.519 } 00:20:47.519 ]' 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.519 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.778 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:47.778 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.347 16:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:48.607 16:47:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.607 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.868 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:48.868 { 00:20:48.868 "cntlid": 49, 00:20:48.868 "qid": 0, 00:20:48.868 "state": "enabled", 00:20:48.868 "thread": "nvmf_tgt_poll_group_000", 00:20:48.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:48.868 "listen_address": { 00:20:48.868 "trtype": "TCP", 00:20:48.868 "adrfam": "IPv4", 
00:20:48.868 "traddr": "10.0.0.2", 00:20:48.868 "trsvcid": "4420" 00:20:48.868 }, 00:20:48.868 "peer_address": { 00:20:48.868 "trtype": "TCP", 00:20:48.868 "adrfam": "IPv4", 00:20:48.868 "traddr": "10.0.0.1", 00:20:48.868 "trsvcid": "50100" 00:20:48.868 }, 00:20:48.868 "auth": { 00:20:48.868 "state": "completed", 00:20:48.868 "digest": "sha384", 00:20:48.868 "dhgroup": "null" 00:20:48.868 } 00:20:48.868 } 00:20:48.868 ]' 00:20:48.868 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.127 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.127 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.127 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:49.127 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.127 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.127 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.127 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.128 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:49.128 16:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:49.695 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.954 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:49.954 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.954 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.954 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.954 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.954 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.955 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.213 00:20:50.214 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.214 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.214 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.473 { 00:20:50.473 "cntlid": 51, 00:20:50.473 "qid": 0, 00:20:50.473 "state": "enabled", 
00:20:50.473 "thread": "nvmf_tgt_poll_group_000", 00:20:50.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:50.473 "listen_address": { 00:20:50.473 "trtype": "TCP", 00:20:50.473 "adrfam": "IPv4", 00:20:50.473 "traddr": "10.0.0.2", 00:20:50.473 "trsvcid": "4420" 00:20:50.473 }, 00:20:50.473 "peer_address": { 00:20:50.473 "trtype": "TCP", 00:20:50.473 "adrfam": "IPv4", 00:20:50.473 "traddr": "10.0.0.1", 00:20:50.473 "trsvcid": "50118" 00:20:50.473 }, 00:20:50.473 "auth": { 00:20:50.473 "state": "completed", 00:20:50.473 "digest": "sha384", 00:20:50.473 "dhgroup": "null" 00:20:50.473 } 00:20:50.473 } 00:20:50.473 ]' 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.473 16:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.473 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:50.473 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.473 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.473 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.473 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.732 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:50.732 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:51.300 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.300 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:51.300 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.300 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.300 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.300 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.300 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.301 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.561 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.561 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.561 16:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.561 00:20:51.561 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.561 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.561 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.820 16:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.820 { 00:20:51.820 "cntlid": 53, 00:20:51.820 "qid": 0, 00:20:51.820 "state": "enabled", 00:20:51.820 "thread": "nvmf_tgt_poll_group_000", 00:20:51.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:51.820 "listen_address": { 00:20:51.820 "trtype": "TCP", 00:20:51.820 "adrfam": "IPv4", 00:20:51.820 "traddr": "10.0.0.2", 00:20:51.820 "trsvcid": "4420" 00:20:51.820 }, 00:20:51.820 "peer_address": { 00:20:51.820 "trtype": "TCP", 00:20:51.820 "adrfam": "IPv4", 00:20:51.820 "traddr": "10.0.0.1", 00:20:51.820 "trsvcid": "50144" 00:20:51.820 }, 00:20:51.820 "auth": { 00:20:51.820 "state": "completed", 00:20:51.820 "digest": "sha384", 00:20:51.820 "dhgroup": "null" 00:20:51.820 } 00:20:51.820 } 00:20:51.820 ]' 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.820 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.079 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:52.079 16:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:52.648 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.907 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.166 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.166 { 00:20:53.166 "cntlid": 55, 00:20:53.166 "qid": 0, 00:20:53.166 "state": "enabled", 00:20:53.166 "thread": "nvmf_tgt_poll_group_000", 00:20:53.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:53.166 "listen_address": { 00:20:53.166 "trtype": "TCP", 00:20:53.166 "adrfam": "IPv4", 00:20:53.166 "traddr": "10.0.0.2", 00:20:53.166 "trsvcid": "4420" 00:20:53.166 }, 00:20:53.166 "peer_address": { 00:20:53.166 "trtype": "TCP", 00:20:53.166 "adrfam": "IPv4", 00:20:53.166 "traddr": "10.0.0.1", 00:20:53.166 "trsvcid": "50168" 00:20:53.166 }, 00:20:53.166 "auth": { 00:20:53.166 "state": "completed", 00:20:53.166 "digest": "sha384", 00:20:53.166 "dhgroup": "null" 00:20:53.166 } 00:20:53.166 } 00:20:53.166 ]' 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.166 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:53.426 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:53.426 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:53.426 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.426 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.426 16:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.426 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:53.426 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.994 16:47:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:53.994 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.253 16:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.513 00:20:54.513 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.513 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.513 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.772 { 00:20:54.772 "cntlid": 57, 00:20:54.772 "qid": 0, 00:20:54.772 "state": "enabled", 00:20:54.772 "thread": "nvmf_tgt_poll_group_000", 00:20:54.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:54.772 "listen_address": { 00:20:54.772 "trtype": "TCP", 00:20:54.772 "adrfam": "IPv4", 00:20:54.772 "traddr": "10.0.0.2", 00:20:54.772 "trsvcid": "4420" 00:20:54.772 }, 00:20:54.772 "peer_address": { 00:20:54.772 "trtype": "TCP", 00:20:54.772 "adrfam": "IPv4", 00:20:54.772 "traddr": "10.0.0.1", 00:20:54.772 "trsvcid": "50192" 00:20:54.772 }, 00:20:54.772 "auth": { 00:20:54.772 "state": "completed", 00:20:54.772 "digest": "sha384", 00:20:54.772 "dhgroup": "ffdhe2048" 00:20:54.772 } 00:20:54.772 } 00:20:54.772 ]' 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.772 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.032 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:55.032 16:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.603 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.863 00:20:55.863 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.863 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.863 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.123 { 00:20:56.123 "cntlid": 59, 00:20:56.123 "qid": 0, 00:20:56.123 "state": "enabled", 00:20:56.123 "thread": "nvmf_tgt_poll_group_000", 00:20:56.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:56.123 "listen_address": { 00:20:56.123 "trtype": "TCP", 00:20:56.123 "adrfam": "IPv4", 00:20:56.123 "traddr": "10.0.0.2", 00:20:56.123 "trsvcid": "4420" 00:20:56.123 }, 00:20:56.123 "peer_address": { 00:20:56.123 "trtype": "TCP", 00:20:56.123 "adrfam": "IPv4", 00:20:56.123 "traddr": "10.0.0.1", 00:20:56.123 "trsvcid": "50224" 00:20:56.123 }, 00:20:56.123 "auth": { 00:20:56.123 "state": "completed", 00:20:56.123 "digest": "sha384", 00:20:56.123 "dhgroup": "ffdhe2048" 00:20:56.123 } 00:20:56.123 } 00:20:56.123 ]' 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.123 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.382 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:56.382 16:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.950 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.208 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:57.208 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.208 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.208 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.209 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.209 00:20:57.466 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.466 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.466 16:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.466 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.466 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.466 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.466 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.467 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.467 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.467 { 00:20:57.467 "cntlid": 61, 00:20:57.467 "qid": 0, 00:20:57.467 "state": "enabled", 00:20:57.467 "thread": "nvmf_tgt_poll_group_000", 00:20:57.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:57.467 "listen_address": { 00:20:57.467 "trtype": "TCP", 00:20:57.467 "adrfam": "IPv4", 00:20:57.467 "traddr": "10.0.0.2", 00:20:57.467 "trsvcid": "4420" 00:20:57.467 }, 00:20:57.467 "peer_address": { 00:20:57.467 "trtype": "TCP", 00:20:57.467 "adrfam": "IPv4", 00:20:57.467 "traddr": "10.0.0.1", 00:20:57.467 "trsvcid": "49794" 00:20:57.467 }, 00:20:57.467 "auth": { 00:20:57.467 "state": "completed", 00:20:57.467 "digest": "sha384", 00:20:57.467 "dhgroup": "ffdhe2048" 00:20:57.467 } 00:20:57.467 } 00:20:57.467 ]' 00:20:57.467 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.467 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.467 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.467 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.467 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.725 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.725 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.725 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.725 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:57.725 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.293 16:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.553 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.813 00:20:58.813 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.813 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:58.813 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.072 { 00:20:59.072 "cntlid": 63, 00:20:59.072 "qid": 0, 00:20:59.072 "state": "enabled", 00:20:59.072 "thread": "nvmf_tgt_poll_group_000", 00:20:59.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:20:59.072 "listen_address": { 00:20:59.072 "trtype": "TCP", 00:20:59.072 "adrfam": "IPv4", 00:20:59.072 "traddr": "10.0.0.2", 00:20:59.072 "trsvcid": "4420" 00:20:59.072 }, 00:20:59.072 "peer_address": { 00:20:59.072 "trtype": "TCP", 00:20:59.072 "adrfam": "IPv4", 00:20:59.072 "traddr": "10.0.0.1", 00:20:59.072 "trsvcid": "49816" 00:20:59.072 }, 00:20:59.072 "auth": { 00:20:59.072 "state": "completed", 00:20:59.072 "digest": "sha384", 00:20:59.072 "dhgroup": "ffdhe2048" 00:20:59.072 } 00:20:59.072 } 00:20:59.072 ]' 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.072 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.073 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.332 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:59.332 16:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:20:59.591 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:20:59.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.850 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.108 
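Every round in this trace has the same shape: pin the host-side bdev driver to one digest/dhgroup combination, authorize the host NQN on the subsystem with the key pair under test, attach a controller (DH-HMAC-CHAP runs during the Fabrics CONNECT), verify it came up, and tear everything back down. A compressed sketch of one such round, calling scripts/rpc.py directly instead of through the rpc_cmd/hostrpc helpers, and assuming key0/ckey0 were registered with the keyring earlier in the script (the target app's socket path is also an assumption; only the host app's /var/tmp/host.sock appears in the trace):

    target_rpc="scripts/rpc.py"                       # nvmf target app; socket path assumed
    host_rpc="scripts/rpc.py -s /var/tmp/host.sock"   # host-side bdev app, as in the trace
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb

    # Pin the host driver to the digest/dhgroup combination under test.
    $host_rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Authorize the host on the subsystem with this round's key pair.
    $target_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach over TCP; DH-HMAC-CHAP authentication runs during CONNECT.
    $host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller came up, then tear down for the next round.
    [[ $($host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $host_rpc bdev_nvme_detach_controller nvme0
    $target_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The real script additionally inspects the qpair's negotiated auth parameters and runs a kernel-initiator connect before the teardown, as shown in the sketches further below.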
00:21:00.108 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.108 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.108 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.367 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.367 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.367 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.367 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.367 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.367 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.367 { 00:21:00.367 "cntlid": 65, 00:21:00.367 "qid": 0, 00:21:00.367 "state": "enabled", 00:21:00.367 "thread": "nvmf_tgt_poll_group_000", 00:21:00.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:00.367 "listen_address": { 00:21:00.367 "trtype": "TCP", 00:21:00.367 "adrfam": "IPv4", 00:21:00.367 "traddr": "10.0.0.2", 00:21:00.367 "trsvcid": "4420" 00:21:00.368 }, 00:21:00.368 "peer_address": { 00:21:00.368 "trtype": "TCP", 00:21:00.368 "adrfam": "IPv4", 00:21:00.368 "traddr": "10.0.0.1", 00:21:00.368 "trsvcid": "49838" 00:21:00.368 }, 00:21:00.368 "auth": { 00:21:00.368 "state": "completed", 00:21:00.368 "digest": "sha384", 00:21:00.368 "dhgroup": "ffdhe3072" 00:21:00.368 } 00:21:00.368 } 00:21:00.368 ]' 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.368 16:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.627 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:00.627 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.199 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.459 16:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.459 00:21:01.459 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.459 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.459 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.719 { 00:21:01.719 "cntlid": 67, 00:21:01.719 "qid": 0, 00:21:01.719 "state": "enabled", 00:21:01.719 "thread": "nvmf_tgt_poll_group_000", 00:21:01.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:01.719 "listen_address": { 00:21:01.719 "trtype": "TCP", 00:21:01.719 "adrfam": "IPv4", 00:21:01.719 "traddr": "10.0.0.2", 00:21:01.719 "trsvcid": "4420" 00:21:01.719 }, 00:21:01.719 "peer_address": { 00:21:01.719 "trtype": "TCP", 00:21:01.719 "adrfam": "IPv4", 00:21:01.719 "traddr": "10.0.0.1", 00:21:01.719 "trsvcid": "49868" 00:21:01.719 }, 00:21:01.719 "auth": { 00:21:01.719 "state": "completed", 00:21:01.719 "digest": "sha384", 00:21:01.719 "dhgroup": "ffdhe3072" 00:21:01.719 } 00:21:01.719 } 00:21:01.719 ]' 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.719 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.979 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret 
DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:01.979 16:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.550 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.810 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.069 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.069 { 00:21:03.069 "cntlid": 69, 00:21:03.069 "qid": 0, 00:21:03.069 "state": "enabled", 00:21:03.069 "thread": "nvmf_tgt_poll_group_000", 00:21:03.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:03.069 "listen_address": { 00:21:03.069 "trtype": "TCP", 00:21:03.069 "adrfam": "IPv4", 00:21:03.069 "traddr": "10.0.0.2", 00:21:03.069 "trsvcid": "4420" 00:21:03.069 }, 00:21:03.069 "peer_address": { 00:21:03.069 "trtype": "TCP", 00:21:03.069 "adrfam": "IPv4", 00:21:03.069 "traddr": "10.0.0.1", 00:21:03.069 "trsvcid": "49894" 00:21:03.069 }, 00:21:03.069 "auth": { 00:21:03.069 "state": "completed", 00:21:03.069 "digest": "sha384", 00:21:03.069 "dhgroup": "ffdhe3072" 00:21:03.069 } 00:21:03.069 } 00:21:03.069 ]' 00:21:03.069 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.328 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.328 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.329 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.329 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.329 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.329 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.329 16:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:03.329 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:03.329 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.294 16:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.555 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.555 { 00:21:04.555 "cntlid": 71, 00:21:04.555 "qid": 0, 00:21:04.555 "state": "enabled", 00:21:04.555 "thread": "nvmf_tgt_poll_group_000", 00:21:04.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:04.555 "listen_address": { 00:21:04.555 "trtype": "TCP", 00:21:04.555 "adrfam": "IPv4", 00:21:04.555 "traddr": "10.0.0.2", 00:21:04.555 "trsvcid": "4420" 00:21:04.555 }, 00:21:04.555 "peer_address": { 00:21:04.555 "trtype": "TCP", 00:21:04.555 "adrfam": "IPv4", 00:21:04.555 "traddr": "10.0.0.1", 00:21:04.555 "trsvcid": "49914" 00:21:04.555 }, 00:21:04.555 "auth": { 00:21:04.555 "state": "completed", 00:21:04.555 "digest": "sha384", 00:21:04.555 "dhgroup": "ffdhe3072" 00:21:04.555 } 00:21:04.555 } 00:21:04.555 ]' 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.555 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.815 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.815 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.815 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.815 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.815 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.815 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:04.815 16:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.386 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
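Between attach and detach, each round also asserts that the qpair actually negotiated what was configured — the auth.sh@74 through @77 checks repeated throughout the trace. nvmf_subsystem_get_qpairs reports a per-qpair "auth" block, and the digest, dhgroup, and state fields are matched against the expected values. Distilled into standalone form, using the ffdhe4096 round as the example (rpc_cmd in the trace is an autotest helper; wrapping scripts/rpc.py directly here is an assumption):

    rpc="scripts/rpc.py"   # target-side RPC endpoint (socket path assumed)
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The auth block only reads "completed" if DH-HMAC-CHAP finished during CONNECT.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]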
00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.645 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.905 00:21:05.905 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.905 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.905 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.165 { 00:21:06.165 "cntlid": 73, 00:21:06.165 "qid": 0, 00:21:06.165 "state": "enabled", 00:21:06.165 "thread": "nvmf_tgt_poll_group_000", 00:21:06.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:06.165 "listen_address": { 00:21:06.165 "trtype": "TCP", 00:21:06.165 "adrfam": "IPv4", 00:21:06.165 "traddr": "10.0.0.2", 00:21:06.165 "trsvcid": "4420" 00:21:06.165 }, 00:21:06.165 "peer_address": { 00:21:06.165 "trtype": "TCP", 00:21:06.165 "adrfam": "IPv4", 00:21:06.165 "traddr": "10.0.0.1", 00:21:06.165 "trsvcid": "49938" 00:21:06.165 }, 00:21:06.165 "auth": { 00:21:06.165 "state": "completed", 00:21:06.165 "digest": "sha384", 00:21:06.165 "dhgroup": "ffdhe4096" 00:21:06.165 } 00:21:06.165 } 00:21:06.165 ]' 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.165 
16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.165 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.426 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:06.426 16:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.995 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.254 00:21:07.254 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.254 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.254 16:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.514 { 00:21:07.514 "cntlid": 75, 00:21:07.514 "qid": 0, 00:21:07.514 "state": "enabled", 00:21:07.514 "thread": "nvmf_tgt_poll_group_000", 00:21:07.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:07.514 "listen_address": { 00:21:07.514 "trtype": "TCP", 00:21:07.514 "adrfam": "IPv4", 00:21:07.514 "traddr": "10.0.0.2", 00:21:07.514 "trsvcid": "4420" 00:21:07.514 }, 00:21:07.514 "peer_address": { 00:21:07.514 "trtype": "TCP", 00:21:07.514 "adrfam": "IPv4", 00:21:07.514 "traddr": "10.0.0.1", 00:21:07.514 "trsvcid": "34942" 00:21:07.514 }, 00:21:07.514 "auth": { 00:21:07.514 "state": "completed", 00:21:07.514 "digest": "sha384", 00:21:07.514 "dhgroup": "ffdhe4096" 00:21:07.514 } 00:21:07.514 } 00:21:07.514 ]' 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.514 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.773 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:07.773 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.342 16:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.602 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.862 00:21:08.862 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.862 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.863 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.863 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.863 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.863 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.863 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.124 { 00:21:09.124 "cntlid": 77, 00:21:09.124 "qid": 0, 00:21:09.124 "state": "enabled", 00:21:09.124 "thread": "nvmf_tgt_poll_group_000", 00:21:09.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:09.124 "listen_address": { 00:21:09.124 "trtype": "TCP", 00:21:09.124 "adrfam": "IPv4", 00:21:09.124 "traddr": "10.0.0.2", 00:21:09.124 "trsvcid": "4420" 00:21:09.124 }, 00:21:09.124 "peer_address": { 00:21:09.124 "trtype": "TCP", 00:21:09.124 "adrfam": "IPv4", 00:21:09.124 "traddr": "10.0.0.1", 00:21:09.124 "trsvcid": "34972" 00:21:09.124 }, 00:21:09.124 "auth": { 00:21:09.124 "state": "completed", 00:21:09.124 "digest": "sha384", 00:21:09.124 "dhgroup": "ffdhe4096" 00:21:09.124 } 00:21:09.124 } 00:21:09.124 ]' 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.124 16:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:09.124 16:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:09.693 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.693 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:09.693 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.693 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.953 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.212 00:21:10.213 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.213 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.213 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.472 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.472 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.472 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.472 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.472 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.472 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.472 { 00:21:10.472 "cntlid": 79, 00:21:10.472 "qid": 0, 00:21:10.472 "state": "enabled", 00:21:10.472 "thread": "nvmf_tgt_poll_group_000", 00:21:10.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:10.472 "listen_address": { 00:21:10.472 "trtype": "TCP", 00:21:10.472 "adrfam": "IPv4", 00:21:10.472 "traddr": "10.0.0.2", 00:21:10.472 "trsvcid": "4420" 00:21:10.472 }, 00:21:10.472 "peer_address": { 00:21:10.472 "trtype": "TCP", 00:21:10.472 "adrfam": "IPv4", 00:21:10.472 "traddr": "10.0.0.1", 00:21:10.472 "trsvcid": "34996" 00:21:10.472 }, 00:21:10.472 "auth": { 00:21:10.472 "state": "completed", 00:21:10.472 "digest": "sha384", 00:21:10.472 "dhgroup": "ffdhe4096" 00:21:10.472 } 00:21:10.472 } 00:21:10.472 ]' 00:21:10.472 16:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.472 16:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.472 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.472 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.472 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.472 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.472 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.472 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.732 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:10.732 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:11.304 16:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.304 16:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.565 00:21:11.565 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.565 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.565 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.825 { 00:21:11.825 "cntlid": 81, 00:21:11.825 "qid": 0, 00:21:11.825 "state": "enabled", 00:21:11.825 "thread": "nvmf_tgt_poll_group_000", 00:21:11.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:11.825 "listen_address": { 00:21:11.825 "trtype": "TCP", 00:21:11.825 "adrfam": "IPv4", 00:21:11.825 "traddr": "10.0.0.2", 00:21:11.825 "trsvcid": "4420" 00:21:11.825 }, 00:21:11.825 "peer_address": { 00:21:11.825 "trtype": "TCP", 00:21:11.825 "adrfam": "IPv4", 00:21:11.825 "traddr": "10.0.0.1", 00:21:11.825 "trsvcid": "35020" 00:21:11.825 }, 00:21:11.825 "auth": { 00:21:11.825 "state": "completed", 00:21:11.825 "digest": 
"sha384", 00:21:11.825 "dhgroup": "ffdhe6144" 00:21:11.825 } 00:21:11.825 } 00:21:11.825 ]' 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.825 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.085 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:12.085 16:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:12.652 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:12.911 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.477 00:21:13.477 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.477 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.477 16:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.477 { 00:21:13.477 "cntlid": 83, 00:21:13.477 "qid": 0, 00:21:13.477 "state": "enabled", 00:21:13.477 "thread": "nvmf_tgt_poll_group_000", 00:21:13.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:13.477 "listen_address": { 00:21:13.477 "trtype": "TCP", 00:21:13.477 "adrfam": "IPv4", 00:21:13.477 "traddr": "10.0.0.2", 00:21:13.477 
"trsvcid": "4420" 00:21:13.477 }, 00:21:13.477 "peer_address": { 00:21:13.477 "trtype": "TCP", 00:21:13.477 "adrfam": "IPv4", 00:21:13.477 "traddr": "10.0.0.1", 00:21:13.477 "trsvcid": "35058" 00:21:13.477 }, 00:21:13.477 "auth": { 00:21:13.477 "state": "completed", 00:21:13.477 "digest": "sha384", 00:21:13.477 "dhgroup": "ffdhe6144" 00:21:13.477 } 00:21:13.477 } 00:21:13.477 ]' 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.477 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.737 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:13.737 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.305 16:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:14.565 
16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.565 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.825 00:21:14.825 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.825 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.825 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.085 { 00:21:15.085 "cntlid": 85, 00:21:15.085 "qid": 0, 00:21:15.085 "state": "enabled", 00:21:15.085 "thread": "nvmf_tgt_poll_group_000", 00:21:15.085 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:15.085 "listen_address": { 00:21:15.085 "trtype": "TCP", 00:21:15.085 "adrfam": "IPv4", 00:21:15.085 "traddr": "10.0.0.2", 00:21:15.085 "trsvcid": "4420" 00:21:15.085 }, 00:21:15.085 "peer_address": { 00:21:15.085 "trtype": "TCP", 00:21:15.085 "adrfam": "IPv4", 00:21:15.085 "traddr": "10.0.0.1", 00:21:15.085 "trsvcid": "35084" 00:21:15.085 }, 00:21:15.085 "auth": { 00:21:15.085 "state": "completed", 00:21:15.085 "digest": "sha384", 00:21:15.085 "dhgroup": "ffdhe6144" 00:21:15.085 } 00:21:15.085 } 00:21:15.085 ]' 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.085 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.344 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:15.345 16:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.914 16:48:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:15.914 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:16.485 00:21:16.485 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.485 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.485 16:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.485 { 00:21:16.485 "cntlid": 87, 
00:21:16.485 "qid": 0, 00:21:16.485 "state": "enabled", 00:21:16.485 "thread": "nvmf_tgt_poll_group_000", 00:21:16.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:16.485 "listen_address": { 00:21:16.485 "trtype": "TCP", 00:21:16.485 "adrfam": "IPv4", 00:21:16.485 "traddr": "10.0.0.2", 00:21:16.485 "trsvcid": "4420" 00:21:16.485 }, 00:21:16.485 "peer_address": { 00:21:16.485 "trtype": "TCP", 00:21:16.485 "adrfam": "IPv4", 00:21:16.485 "traddr": "10.0.0.1", 00:21:16.485 "trsvcid": "35098" 00:21:16.485 }, 00:21:16.485 "auth": { 00:21:16.485 "state": "completed", 00:21:16.485 "digest": "sha384", 00:21:16.485 "dhgroup": "ffdhe6144" 00:21:16.485 } 00:21:16.485 } 00:21:16.485 ]' 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.485 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.743 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:16.743 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.309 16:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.568 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.827 00:21:18.086 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.086 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.086 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.086 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.087 { 00:21:18.087 "cntlid": 89, 00:21:18.087 "qid": 0, 00:21:18.087 "state": "enabled", 00:21:18.087 "thread": "nvmf_tgt_poll_group_000", 00:21:18.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:18.087 "listen_address": { 00:21:18.087 "trtype": "TCP", 00:21:18.087 "adrfam": "IPv4", 00:21:18.087 "traddr": "10.0.0.2", 00:21:18.087 "trsvcid": "4420" 00:21:18.087 }, 00:21:18.087 "peer_address": { 00:21:18.087 "trtype": "TCP", 00:21:18.087 "adrfam": "IPv4", 00:21:18.087 "traddr": "10.0.0.1", 00:21:18.087 "trsvcid": "48360" 00:21:18.087 }, 00:21:18.087 "auth": { 00:21:18.087 "state": "completed", 00:21:18.087 "digest": "sha384", 00:21:18.087 "dhgroup": "ffdhe8192" 00:21:18.087 } 00:21:18.087 } 00:21:18.087 ]' 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:18.087 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.346 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.346 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.346 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.347 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:18.347 16:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:18.914 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.914 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:18.914 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.914 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.914 16:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.914 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.914 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.914 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.198 16:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.504 00:21:19.504 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.504 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.504 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.775 { 00:21:19.775 "cntlid": 91, 00:21:19.775 "qid": 0, 00:21:19.775 "state": "enabled", 00:21:19.775 "thread": "nvmf_tgt_poll_group_000", 00:21:19.775 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:19.775 "listen_address": { 00:21:19.775 "trtype": "TCP", 00:21:19.775 "adrfam": "IPv4", 00:21:19.775 "traddr": "10.0.0.2", 00:21:19.775 "trsvcid": "4420" 00:21:19.775 }, 00:21:19.775 "peer_address": { 00:21:19.775 "trtype": "TCP", 00:21:19.775 "adrfam": "IPv4", 00:21:19.775 "traddr": "10.0.0.1", 00:21:19.775 "trsvcid": "48384" 00:21:19.775 }, 00:21:19.775 "auth": { 00:21:19.775 "state": "completed", 00:21:19.775 "digest": "sha384", 00:21:19.775 "dhgroup": "ffdhe8192" 00:21:19.775 } 00:21:19.775 } 00:21:19.775 ]' 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.775 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.776 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.036 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:20.036 16:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:20.604 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.604 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:20.605 16:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.605 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.605 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.605 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.605 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.605 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.862 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.430 00:21:21.430 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.430 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.430 16:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.430 16:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.430 { 00:21:21.430 "cntlid": 93, 00:21:21.430 "qid": 0, 00:21:21.430 "state": "enabled", 00:21:21.430 "thread": "nvmf_tgt_poll_group_000", 00:21:21.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:21.430 "listen_address": { 00:21:21.430 "trtype": "TCP", 00:21:21.430 "adrfam": "IPv4", 00:21:21.430 "traddr": "10.0.0.2", 00:21:21.430 "trsvcid": "4420" 00:21:21.430 }, 00:21:21.430 "peer_address": { 00:21:21.430 "trtype": "TCP", 00:21:21.430 "adrfam": "IPv4", 00:21:21.430 "traddr": "10.0.0.1", 00:21:21.430 "trsvcid": "48414" 00:21:21.430 }, 00:21:21.430 "auth": { 00:21:21.430 "state": "completed", 00:21:21.430 "digest": "sha384", 00:21:21.430 "dhgroup": "ffdhe8192" 00:21:21.430 } 00:21:21.430 } 00:21:21.430 ]' 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.430 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:21.431 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.431 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.431 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.431 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.689 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:21.689 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:22.257 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.257 16:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:22.257 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.257 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.257 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.257 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.257 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.257 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.517 16:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:22.777 00:21:22.778 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.778 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.778 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.037 { 00:21:23.037 "cntlid": 95, 00:21:23.037 "qid": 0, 00:21:23.037 "state": "enabled", 00:21:23.037 "thread": "nvmf_tgt_poll_group_000", 00:21:23.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:23.037 "listen_address": { 00:21:23.037 "trtype": "TCP", 00:21:23.037 "adrfam": "IPv4", 00:21:23.037 "traddr": "10.0.0.2", 00:21:23.037 "trsvcid": "4420" 00:21:23.037 }, 00:21:23.037 "peer_address": { 00:21:23.037 "trtype": "TCP", 00:21:23.037 "adrfam": "IPv4", 00:21:23.037 "traddr": "10.0.0.1", 00:21:23.037 "trsvcid": "48450" 00:21:23.037 }, 00:21:23.037 "auth": { 00:21:23.037 "state": "completed", 00:21:23.037 "digest": "sha384", 00:21:23.037 "dhgroup": "ffdhe8192" 00:21:23.037 } 00:21:23.037 } 00:21:23.037 ]' 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.037 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.295 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:23.295 16:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.861 16:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.861 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.119 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.379 00:21:24.380 
16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.380 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.380 16:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.380 { 00:21:24.380 "cntlid": 97, 00:21:24.380 "qid": 0, 00:21:24.380 "state": "enabled", 00:21:24.380 "thread": "nvmf_tgt_poll_group_000", 00:21:24.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:24.380 "listen_address": { 00:21:24.380 "trtype": "TCP", 00:21:24.380 "adrfam": "IPv4", 00:21:24.380 "traddr": "10.0.0.2", 00:21:24.380 "trsvcid": "4420" 00:21:24.380 }, 00:21:24.380 "peer_address": { 00:21:24.380 "trtype": "TCP", 00:21:24.380 "adrfam": "IPv4", 00:21:24.380 "traddr": "10.0.0.1", 00:21:24.380 "trsvcid": "48472" 00:21:24.380 }, 00:21:24.380 "auth": { 00:21:24.380 "state": "completed", 00:21:24.380 "digest": "sha512", 00:21:24.380 "dhgroup": "null" 00:21:24.380 } 00:21:24.380 } 00:21:24.380 ]' 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.380 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.640 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:24.640 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.640 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.640 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.640 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.640 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:24.640 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.209 16:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.469 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.730 00:21:25.730 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.730 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.730 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.990 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.991 { 00:21:25.991 "cntlid": 99, 00:21:25.991 "qid": 0, 00:21:25.991 "state": "enabled", 00:21:25.991 "thread": "nvmf_tgt_poll_group_000", 00:21:25.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:25.991 "listen_address": { 00:21:25.991 "trtype": "TCP", 00:21:25.991 "adrfam": "IPv4", 00:21:25.991 "traddr": "10.0.0.2", 00:21:25.991 "trsvcid": "4420" 00:21:25.991 }, 00:21:25.991 "peer_address": { 00:21:25.991 "trtype": "TCP", 00:21:25.991 "adrfam": "IPv4", 00:21:25.991 "traddr": "10.0.0.1", 00:21:25.991 "trsvcid": "48512" 00:21:25.991 }, 00:21:25.991 "auth": { 00:21:25.991 "state": "completed", 00:21:25.991 "digest": "sha512", 00:21:25.991 "dhgroup": "null" 00:21:25.991 } 00:21:25.991 } 00:21:25.991 ]' 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.991 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.251 16:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:26.251 16:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
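(For reference while reading the trace: each connect_authenticate iteration above runs the same three-step sequence, sketched below with the socket path, NQNs, and key names taken from this log. The key2/ckey2 names are keyring entries created earlier in the run, the target is assumed to answer on its default RPC socket, and scripts/rpc.py is the SPDK-tree path — treat this as an illustrative sketch, not the exact script body.)

# 1) Host side (RPC server on /var/tmp/host.sock): restrict the bdev_nvme
#    layer to the digest/dhgroup pair under test (sha512 / null here).
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null
# 2) Target side: allow the host NQN on the subsystem with the matching
#    host key and (for bidirectional auth) controller key.
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3) Host side: attach a controller; DH-HMAC-CHAP runs during CONNECT.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2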
00:21:26.833 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.092 00:21:27.092 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.092 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.092 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.351 { 00:21:27.351 "cntlid": 101, 00:21:27.351 "qid": 0, 00:21:27.351 "state": "enabled", 00:21:27.351 "thread": "nvmf_tgt_poll_group_000", 00:21:27.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:27.351 "listen_address": { 00:21:27.351 "trtype": "TCP", 00:21:27.351 "adrfam": "IPv4", 00:21:27.351 "traddr": "10.0.0.2", 00:21:27.351 "trsvcid": "4420" 00:21:27.351 }, 00:21:27.351 "peer_address": { 00:21:27.351 "trtype": "TCP", 00:21:27.351 "adrfam": "IPv4", 00:21:27.351 "traddr": "10.0.0.1", 00:21:27.351 "trsvcid": "48532" 00:21:27.351 }, 00:21:27.351 "auth": { 00:21:27.351 "state": "completed", 00:21:27.351 "digest": "sha512", 00:21:27.351 "dhgroup": "null" 00:21:27.351 } 00:21:27.351 } 00:21:27.351 ]' 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.351 16:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.611 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:27.611 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.178 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.438 16:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.438 00:21:28.438 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.438 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.438 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.698 { 00:21:28.698 "cntlid": 103, 00:21:28.698 "qid": 0, 00:21:28.698 "state": "enabled", 00:21:28.698 "thread": "nvmf_tgt_poll_group_000", 00:21:28.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:28.698 "listen_address": { 00:21:28.698 "trtype": "TCP", 00:21:28.698 "adrfam": "IPv4", 00:21:28.698 "traddr": "10.0.0.2", 00:21:28.698 "trsvcid": "4420" 00:21:28.698 }, 00:21:28.698 "peer_address": { 00:21:28.698 "trtype": "TCP", 00:21:28.698 "adrfam": "IPv4", 00:21:28.698 "traddr": "10.0.0.1", 00:21:28.698 "trsvcid": "59768" 00:21:28.698 }, 00:21:28.698 "auth": { 00:21:28.698 "state": "completed", 00:21:28.698 "digest": "sha512", 00:21:28.698 "dhgroup": "null" 00:21:28.698 } 00:21:28.698 } 00:21:28.698 ]' 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.698 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.958 16:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:28.958 16:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:29.526 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.526 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:29.526 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.527 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.527 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.527 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.527 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.527 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.527 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
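(The kernel-initiator leg of each iteration has the shape of the nvme_connect call traced above; the sketch below pairs it with the qpair check the script performs on the target afterwards. The DHHC-1 secret is a placeholder here — the real values are generated earlier in the run — and, as above, the relative scripts/rpc.py path and default target socket are assumptions.)

# Kernel host: connect, presenting the host DH-HMAC-CHAP secret (iterations
# with a controller key also pass --dhchap-ctrl-secret for bidirectional auth).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
    --dhchap-secret 'DHHC-1:00:placeholder-host-key:'   # placeholder value
# Target: the qpair should report a completed negotiation with the
# digest/dhgroup pair under test (sha512 / ffdhe2048 at this point).
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.state'                           # expect: completed
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup'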
00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.787 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.047 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.047 { 00:21:30.047 "cntlid": 105, 00:21:30.047 "qid": 0, 00:21:30.047 "state": "enabled", 00:21:30.047 "thread": "nvmf_tgt_poll_group_000", 00:21:30.047 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:30.047 "listen_address": { 00:21:30.047 "trtype": "TCP", 00:21:30.047 "adrfam": "IPv4", 00:21:30.047 "traddr": "10.0.0.2", 00:21:30.047 "trsvcid": "4420" 00:21:30.047 }, 00:21:30.047 "peer_address": { 00:21:30.047 "trtype": "TCP", 00:21:30.047 "adrfam": "IPv4", 00:21:30.047 "traddr": "10.0.0.1", 00:21:30.047 "trsvcid": "59792" 00:21:30.047 }, 00:21:30.047 "auth": { 00:21:30.047 "state": "completed", 00:21:30.047 "digest": "sha512", 00:21:30.047 "dhgroup": "ffdhe2048" 00:21:30.047 } 00:21:30.047 } 00:21:30.047 ]' 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.047 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.306 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:30.306 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.306 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.306 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.306 16:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.306 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:30.306 16:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:30.872 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.132 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.391 00:21:31.391 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.391 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.391 16:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.650 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.650 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.651 { 00:21:31.651 "cntlid": 107, 00:21:31.651 "qid": 0, 00:21:31.651 "state": "enabled", 00:21:31.651 "thread": "nvmf_tgt_poll_group_000", 00:21:31.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:31.651 "listen_address": { 00:21:31.651 "trtype": "TCP", 00:21:31.651 "adrfam": "IPv4", 00:21:31.651 "traddr": "10.0.0.2", 00:21:31.651 "trsvcid": "4420" 00:21:31.651 }, 00:21:31.651 "peer_address": { 00:21:31.651 "trtype": "TCP", 00:21:31.651 "adrfam": "IPv4", 00:21:31.651 "traddr": "10.0.0.1", 00:21:31.651 "trsvcid": "59818" 00:21:31.651 }, 00:21:31.651 "auth": { 00:21:31.651 "state": "completed", 00:21:31.651 "digest": "sha512", 00:21:31.651 "dhgroup": "ffdhe2048" 00:21:31.651 } 00:21:31.651 } 00:21:31.651 ]' 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.651 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.911 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:31.911 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.479 16:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 
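[editor's note] The nvme connect calls in this trace pass secrets in the DHHC-1 textual form, DHHC-1:<tt>:<base64>:, and the four <tt> values cycled here (00 through 03) correspond — as I read the NVMe DH-HMAC-CHAP secret representation, not something this log states — to no transform, SHA-256, SHA-384, and SHA-512 key sizes, with the base64 payload decoding to the key material plus what appears to be a 4-byte check tail. A quick length sanity check, using a secret copied from the trace:

    # Decode a DHHC-1 secret and report its payload size in bytes.
    secret='DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI:'
    b64=${secret#DHHC-1:??:}            # drop the "DHHC-1:<transform>:" prefix
    b64=${b64%:}                        # drop the trailing colon
    echo -n "$b64" | base64 -d | wc -c  # 36 here: 32-byte key + 4-byte check tail

The lengths in this log are consistent with that reading: the :01: secrets decode to 36 bytes, :02: to 52, and :03: to 68.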
00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.479 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:32.738 00:21:32.738 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.738 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.738 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.997 { 00:21:32.997 "cntlid": 109, 00:21:32.997 "qid": 0, 00:21:32.997 "state": "enabled", 00:21:32.997 "thread": "nvmf_tgt_poll_group_000", 00:21:32.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:32.997 "listen_address": { 00:21:32.997 "trtype": "TCP", 00:21:32.997 "adrfam": "IPv4", 00:21:32.997 "traddr": "10.0.0.2", 00:21:32.997 "trsvcid": "4420" 00:21:32.997 }, 00:21:32.997 "peer_address": { 00:21:32.997 "trtype": "TCP", 00:21:32.997 "adrfam": "IPv4", 00:21:32.997 "traddr": "10.0.0.1", 00:21:32.997 "trsvcid": "59846" 00:21:32.997 }, 00:21:32.997 "auth": { 00:21:32.997 "state": "completed", 00:21:32.997 "digest": "sha512", 00:21:32.997 "dhgroup": "ffdhe2048" 00:21:32.997 } 00:21:32.997 } 00:21:32.997 ]' 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.997 16:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.997 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.256 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:33.256 16:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:33.823 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.082 16:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.082 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.342 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.342 { 00:21:34.342 "cntlid": 111, 00:21:34.342 "qid": 0, 00:21:34.342 "state": "enabled", 00:21:34.342 "thread": "nvmf_tgt_poll_group_000", 00:21:34.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:34.342 "listen_address": { 00:21:34.342 "trtype": "TCP", 00:21:34.342 "adrfam": "IPv4", 00:21:34.342 "traddr": "10.0.0.2", 00:21:34.342 "trsvcid": "4420" 00:21:34.342 }, 00:21:34.342 "peer_address": { 00:21:34.342 "trtype": "TCP", 00:21:34.342 "adrfam": "IPv4", 00:21:34.342 "traddr": "10.0.0.1", 00:21:34.342 "trsvcid": "59884" 00:21:34.342 }, 00:21:34.342 "auth": { 00:21:34.342 "state": "completed", 00:21:34.342 "digest": "sha512", 00:21:34.342 "dhgroup": "ffdhe2048" 00:21:34.342 } 00:21:34.342 } 00:21:34.342 ]' 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.342 16:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.342 
16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.342 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:34.342 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.601 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.601 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.601 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.601 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:34.601 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.169 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.428 16:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.686 00:21:35.686 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.686 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.687 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.687 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.687 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.687 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.687 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.687 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.687 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.687 { 00:21:35.687 "cntlid": 113, 00:21:35.687 "qid": 0, 00:21:35.687 "state": "enabled", 00:21:35.687 "thread": "nvmf_tgt_poll_group_000", 00:21:35.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:35.687 "listen_address": { 00:21:35.687 "trtype": "TCP", 00:21:35.687 "adrfam": "IPv4", 00:21:35.687 "traddr": "10.0.0.2", 00:21:35.687 "trsvcid": "4420" 00:21:35.687 }, 00:21:35.687 "peer_address": { 00:21:35.687 "trtype": "TCP", 00:21:35.687 "adrfam": "IPv4", 00:21:35.687 "traddr": "10.0.0.1", 00:21:35.687 "trsvcid": "59902" 00:21:35.687 }, 00:21:35.687 "auth": { 00:21:35.687 "state": "completed", 00:21:35.687 "digest": "sha512", 00:21:35.687 "dhgroup": "ffdhe3072" 00:21:35.687 } 00:21:35.687 } 00:21:35.687 ]' 00:21:35.687 16:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.946 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.946 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.946 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:35.946 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.946 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.946 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.947 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.947 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:35.947 16:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:36.516 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.517 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:36.517 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.517 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.517 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.517 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.517 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.517 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.775 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.033 00:21:37.033 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.033 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.033 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.291 { 00:21:37.291 "cntlid": 115, 00:21:37.291 "qid": 0, 00:21:37.291 "state": "enabled", 00:21:37.291 "thread": "nvmf_tgt_poll_group_000", 00:21:37.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:37.291 "listen_address": { 00:21:37.291 "trtype": "TCP", 00:21:37.291 "adrfam": "IPv4", 00:21:37.291 "traddr": "10.0.0.2", 00:21:37.291 "trsvcid": "4420" 00:21:37.291 }, 00:21:37.291 "peer_address": { 00:21:37.291 "trtype": "TCP", 00:21:37.291 "adrfam": "IPv4", 
00:21:37.291 "traddr": "10.0.0.1", 00:21:37.291 "trsvcid": "59934" 00:21:37.291 }, 00:21:37.291 "auth": { 00:21:37.291 "state": "completed", 00:21:37.291 "digest": "sha512", 00:21:37.291 "dhgroup": "ffdhe3072" 00:21:37.291 } 00:21:37.291 } 00:21:37.291 ]' 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.291 16:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.549 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:37.549 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.116 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.375 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.375 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.375 16:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.376 00:21:38.376 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.376 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.376 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.635 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.635 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.635 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.635 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.635 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.635 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.635 { 00:21:38.635 "cntlid": 117, 00:21:38.635 "qid": 0, 00:21:38.635 "state": "enabled", 00:21:38.635 "thread": "nvmf_tgt_poll_group_000", 00:21:38.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:38.635 "listen_address": { 00:21:38.635 "trtype": "TCP", 
00:21:38.635 "adrfam": "IPv4", 00:21:38.635 "traddr": "10.0.0.2", 00:21:38.635 "trsvcid": "4420" 00:21:38.635 }, 00:21:38.635 "peer_address": { 00:21:38.635 "trtype": "TCP", 00:21:38.635 "adrfam": "IPv4", 00:21:38.635 "traddr": "10.0.0.1", 00:21:38.635 "trsvcid": "53098" 00:21:38.635 }, 00:21:38.635 "auth": { 00:21:38.635 "state": "completed", 00:21:38.635 "digest": "sha512", 00:21:38.635 "dhgroup": "ffdhe3072" 00:21:38.635 } 00:21:38.635 } 00:21:38.635 ]' 00:21:38.635 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.636 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.636 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.636 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.636 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.636 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.636 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.636 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.896 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:38.896 16:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.465 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.725 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.986 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.986 { 00:21:39.986 "cntlid": 119, 00:21:39.986 "qid": 0, 00:21:39.986 "state": "enabled", 00:21:39.986 "thread": "nvmf_tgt_poll_group_000", 00:21:39.986 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:39.986 "listen_address": { 00:21:39.986 "trtype": "TCP", 00:21:39.986 "adrfam": "IPv4", 00:21:39.986 "traddr": "10.0.0.2", 00:21:39.986 "trsvcid": "4420" 00:21:39.986 }, 00:21:39.986 "peer_address": { 00:21:39.986 "trtype": "TCP", 00:21:39.986 "adrfam": "IPv4", 00:21:39.986 "traddr": "10.0.0.1", 00:21:39.986 "trsvcid": "53122" 00:21:39.986 }, 00:21:39.986 "auth": { 00:21:39.986 "state": "completed", 00:21:39.986 "digest": "sha512", 00:21:39.986 "dhgroup": "ffdhe3072" 00:21:39.986 } 00:21:39.986 } 00:21:39.986 ]' 00:21:39.986 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:40.245 16:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:40.814 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.074 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:41.074 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.074 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.074 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.074 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.074 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.074 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.075 16:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.075 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.334 00:21:41.335 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.335 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.335 16:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.595 16:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.595 { 00:21:41.595 "cntlid": 121, 00:21:41.595 "qid": 0, 00:21:41.595 "state": "enabled", 00:21:41.595 "thread": "nvmf_tgt_poll_group_000", 00:21:41.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:41.595 "listen_address": { 00:21:41.595 "trtype": "TCP", 00:21:41.595 "adrfam": "IPv4", 00:21:41.595 "traddr": "10.0.0.2", 00:21:41.595 "trsvcid": "4420" 00:21:41.595 }, 00:21:41.595 "peer_address": { 00:21:41.595 "trtype": "TCP", 00:21:41.595 "adrfam": "IPv4", 00:21:41.595 "traddr": "10.0.0.1", 00:21:41.595 "trsvcid": "53144" 00:21:41.595 }, 00:21:41.595 "auth": { 00:21:41.595 "state": "completed", 00:21:41.595 "digest": "sha512", 00:21:41.595 "dhgroup": "ffdhe4096" 00:21:41.595 } 00:21:41.595 } 00:21:41.595 ]' 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.595 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.856 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:41.856 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:42.424 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.424 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:42.424 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.424 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.424 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
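[editor's note] Zooming out, the @119/@120 markers show the driving structure: an outer loop over DH groups and an inner loop over the key indices, with index 3 carrying no controller key (its nvmf_subsystem_add_host and nvme connect calls omit the ctrlr-key arguments), so that pass authenticates the host only. A sketch of that shape — the dhgroups array is limited to the groups visible in this excerpt, and connect_authenticate and keys are assumed to be the script's own helper and key table:

    # Reconstruction of the test's outer loops (names from the trace markers).
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen so far in this excerpt
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options \
                --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done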
00:21:42.424 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.424 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.425 16:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.685 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.944 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.944 { 00:21:42.944 "cntlid": 123, 00:21:42.944 "qid": 0, 00:21:42.944 "state": "enabled", 00:21:42.944 "thread": "nvmf_tgt_poll_group_000", 00:21:42.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:42.944 "listen_address": { 00:21:42.944 "trtype": "TCP", 00:21:42.944 "adrfam": "IPv4", 00:21:42.944 "traddr": "10.0.0.2", 00:21:42.944 "trsvcid": "4420" 00:21:42.944 }, 00:21:42.944 "peer_address": { 00:21:42.944 "trtype": "TCP", 00:21:42.944 "adrfam": "IPv4", 00:21:42.944 "traddr": "10.0.0.1", 00:21:42.944 "trsvcid": "53152" 00:21:42.944 }, 00:21:42.944 "auth": { 00:21:42.944 "state": "completed", 00:21:42.944 "digest": "sha512", 00:21:42.944 "dhgroup": "ffdhe4096" 00:21:42.944 } 00:21:42.944 } 00:21:42.944 ]' 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.944 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.203 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.203 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.203 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.203 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.203 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.203 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:43.203 16:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:43.772 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.772 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:43.772 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.772 16:48:32 
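Each iteration then repeats the handshake through the kernel initiator: nvme-cli is handed the same secrets in their DHHC-1 wire form rather than as keyring names, and the host entry is removed after the connect/disconnect round-trip so the next key id starts from a clean subsystem. A sketch of that pass, with the secrets abbreviated (the full base64 strings appear verbatim in the trace):

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
  nvme disconnect -n "$subnqn"

  # Unregister the host so the following iteration reconfigures from scratch
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"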
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.031 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.290 00:21:44.290 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.290 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.290 16:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.548 16:48:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.548 { 00:21:44.548 "cntlid": 125, 00:21:44.548 "qid": 0, 00:21:44.548 "state": "enabled", 00:21:44.548 "thread": "nvmf_tgt_poll_group_000", 00:21:44.548 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:44.548 "listen_address": { 00:21:44.548 "trtype": "TCP", 00:21:44.548 "adrfam": "IPv4", 00:21:44.548 "traddr": "10.0.0.2", 00:21:44.548 "trsvcid": "4420" 00:21:44.548 }, 00:21:44.548 "peer_address": { 00:21:44.548 "trtype": "TCP", 00:21:44.548 "adrfam": "IPv4", 00:21:44.548 "traddr": "10.0.0.1", 00:21:44.548 "trsvcid": "53178" 00:21:44.548 }, 00:21:44.548 "auth": { 00:21:44.548 "state": "completed", 00:21:44.548 "digest": "sha512", 00:21:44.548 "dhgroup": "ffdhe4096" 00:21:44.548 } 00:21:44.548 } 00:21:44.548 ]' 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.548 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.806 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:44.806 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.374 16:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.633 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.891 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.892 16:48:34 
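Note that the key3 iteration above registers the host with --dhchap-key key3 only: no controller key is configured, so the exchange is unidirectional (the host proves its identity, the target is not challenged back). The script gets this for free from the ckey=() expansion visible in the trace: the --dhchap-ctrlr-key argument pair materializes only when a controller key exists for that key id. Roughly, with keyid standing in for the function's third positional argument ($3 in the trace) and rpc_cmd being the harness's target-side RPC helper:

  # ckeys[keyid] is empty/unset for the unidirectional case (keyid 3 here).
  # ${var:+...} expands to the alternate words only when var is non-empty,
  # and an empty array contributes zero arguments under "${ckey[@]}".
  ckey=( ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"} )
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"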
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.892 { 00:21:45.892 "cntlid": 127, 00:21:45.892 "qid": 0, 00:21:45.892 "state": "enabled", 00:21:45.892 "thread": "nvmf_tgt_poll_group_000", 00:21:45.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:45.892 "listen_address": { 00:21:45.892 "trtype": "TCP", 00:21:45.892 "adrfam": "IPv4", 00:21:45.892 "traddr": "10.0.0.2", 00:21:45.892 "trsvcid": "4420" 00:21:45.892 }, 00:21:45.892 "peer_address": { 00:21:45.892 "trtype": "TCP", 00:21:45.892 "adrfam": "IPv4", 00:21:45.892 "traddr": "10.0.0.1", 00:21:45.892 "trsvcid": "53204" 00:21:45.892 }, 00:21:45.892 "auth": { 00:21:45.892 "state": "completed", 00:21:45.892 "digest": "sha512", 00:21:45.892 "dhgroup": "ffdhe4096" 00:21:45.892 } 00:21:45.892 } 00:21:45.892 ]' 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.892 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.151 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.151 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.151 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.151 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:46.151 16:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.718 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:46.976 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.234 00:21:47.234 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.234 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
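At this point the outer loop advances from ffdhe4096 to ffdhe6144 and the four key ids are replayed. The driving structure can be read straight off the auth.sh line markers in the trace (@119 outer loop, @120 inner loop, @121 set_options, @123 connect_authenticate); reconstructed approximately, with the digest fixed at sha512 as it is throughout this excerpt:

  for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119: ffdhe4096/6144/8192 here
      for keyid in "${!keys[@]}"; do       # auth.sh@120: key ids 0..3
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"   # @121
          connect_authenticate sha512 "$dhgroup" "$keyid"            # @123
      done
  done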
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.234 16:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.493 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.493 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.493 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.493 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.493 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.493 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.493 { 00:21:47.493 "cntlid": 129, 00:21:47.493 "qid": 0, 00:21:47.493 "state": "enabled", 00:21:47.493 "thread": "nvmf_tgt_poll_group_000", 00:21:47.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:47.493 "listen_address": { 00:21:47.493 "trtype": "TCP", 00:21:47.493 "adrfam": "IPv4", 00:21:47.493 "traddr": "10.0.0.2", 00:21:47.493 "trsvcid": "4420" 00:21:47.493 }, 00:21:47.493 "peer_address": { 00:21:47.493 "trtype": "TCP", 00:21:47.493 "adrfam": "IPv4", 00:21:47.493 "traddr": "10.0.0.1", 00:21:47.493 "trsvcid": "53224" 00:21:47.493 }, 00:21:47.493 "auth": { 00:21:47.493 "state": "completed", 00:21:47.493 "digest": "sha512", 00:21:47.493 "dhgroup": "ffdhe6144" 00:21:47.493 } 00:21:47.494 } 00:21:47.494 ]' 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.494 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.770 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:47.770 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret 
DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.339 16:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.598 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.858 00:21:48.858 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.858 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.858 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.117 { 00:21:49.117 "cntlid": 131, 00:21:49.117 "qid": 0, 00:21:49.117 "state": "enabled", 00:21:49.117 "thread": "nvmf_tgt_poll_group_000", 00:21:49.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:49.117 "listen_address": { 00:21:49.117 "trtype": "TCP", 00:21:49.117 "adrfam": "IPv4", 00:21:49.117 "traddr": "10.0.0.2", 00:21:49.117 "trsvcid": "4420" 00:21:49.117 }, 00:21:49.117 "peer_address": { 00:21:49.117 "trtype": "TCP", 00:21:49.117 "adrfam": "IPv4", 00:21:49.117 "traddr": "10.0.0.1", 00:21:49.117 "trsvcid": "41052" 00:21:49.117 }, 00:21:49.117 "auth": { 00:21:49.117 "state": "completed", 00:21:49.117 "digest": "sha512", 00:21:49.117 "dhgroup": "ffdhe6144" 00:21:49.117 } 00:21:49.117 } 00:21:49.117 ]' 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.117 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.376 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:49.376 16:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:49.944 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.945 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
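The DHHC-1 strings follow the NVMe in-band authentication secret representation, DHHC-1:<t>:<base64>:, where <t> (00 through 03) tags the secret length/transform and, to the best of my reading of the format, the base64 payload is the key material followed by a 4-byte CRC-32 integrity check. That makes the lengths easy to sanity-check from the shell; the DHHC-1:01: secret above carries a 48-character payload that decodes to 36 bytes, consistent with a 32-byte secret plus CRC:

  secret='DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI:'
  b64=${secret#DHHC-1:*:}   # strip the "DHHC-1:<t>:" prefix
  b64=${b64%:}              # strip the trailing colon
  echo -n "$b64" | base64 -d | wc -c   # prints 36 (= 32-byte key + CRC-32)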
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.515 00:21:50.515 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.515 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.515 16:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.515 { 00:21:50.515 "cntlid": 133, 00:21:50.515 "qid": 0, 00:21:50.515 "state": "enabled", 00:21:50.515 "thread": "nvmf_tgt_poll_group_000", 00:21:50.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:50.515 "listen_address": { 00:21:50.515 "trtype": "TCP", 00:21:50.515 "adrfam": "IPv4", 00:21:50.515 "traddr": "10.0.0.2", 00:21:50.515 "trsvcid": "4420" 00:21:50.515 }, 00:21:50.515 "peer_address": { 00:21:50.515 "trtype": "TCP", 00:21:50.515 "adrfam": "IPv4", 00:21:50.515 "traddr": "10.0.0.1", 00:21:50.515 "trsvcid": "41076" 00:21:50.515 }, 00:21:50.515 "auth": { 00:21:50.515 "state": "completed", 00:21:50.515 "digest": "sha512", 00:21:50.515 "dhgroup": "ffdhe6144" 00:21:50.515 } 00:21:50.515 } 00:21:50.515 ]' 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.515 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.774 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret 
DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:50.774 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.344 16:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:21:51.604 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.864 00:21:51.864 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.864 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.864 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.124 { 00:21:52.124 "cntlid": 135, 00:21:52.124 "qid": 0, 00:21:52.124 "state": "enabled", 00:21:52.124 "thread": "nvmf_tgt_poll_group_000", 00:21:52.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:52.124 "listen_address": { 00:21:52.124 "trtype": "TCP", 00:21:52.124 "adrfam": "IPv4", 00:21:52.124 "traddr": "10.0.0.2", 00:21:52.124 "trsvcid": "4420" 00:21:52.124 }, 00:21:52.124 "peer_address": { 00:21:52.124 "trtype": "TCP", 00:21:52.124 "adrfam": "IPv4", 00:21:52.124 "traddr": "10.0.0.1", 00:21:52.124 "trsvcid": "41104" 00:21:52.124 }, 00:21:52.124 "auth": { 00:21:52.124 "state": "completed", 00:21:52.124 "digest": "sha512", 00:21:52.124 "dhgroup": "ffdhe6144" 00:21:52.124 } 00:21:52.124 } 00:21:52.124 ]' 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.124 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.385 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:52.385 16:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.955 16:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.526 00:21:53.526 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.526 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.526 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.786 { 00:21:53.786 "cntlid": 137, 00:21:53.786 "qid": 0, 00:21:53.786 "state": "enabled", 00:21:53.786 "thread": "nvmf_tgt_poll_group_000", 00:21:53.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:53.786 "listen_address": { 00:21:53.786 "trtype": "TCP", 00:21:53.786 "adrfam": "IPv4", 00:21:53.786 "traddr": "10.0.0.2", 00:21:53.786 "trsvcid": "4420" 00:21:53.786 }, 00:21:53.786 "peer_address": { 00:21:53.786 "trtype": "TCP", 00:21:53.786 "adrfam": "IPv4", 00:21:53.786 "traddr": "10.0.0.1", 00:21:53.786 "trsvcid": "41130" 00:21:53.786 }, 00:21:53.786 "auth": { 00:21:53.786 "state": "completed", 00:21:53.786 "digest": "sha512", 00:21:53.786 "dhgroup": "ffdhe8192" 00:21:53.786 } 00:21:53.786 } 00:21:53.786 ]' 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.786 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.045 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:54.045 16:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.615 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.874 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.875 16:48:43 
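One wiring detail worth calling out once: every hostrpc expansion in this trace (target/auth.sh@31) is a thin wrapper that points rpc.py at the host application's RPC socket, while the bare rpc_cmd calls go to the target application on its default socket. Reconstructed approximately from the expansions shown:

  # target/auth.sh@31 (approximate): talk to the host app, not the target
  hostrpc() {
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/host.sock "$@"
  }

This is why the same bdev_nvme_* and nvmf_* RPC families appear throughout the log both with and without -s /var/tmp/host.sock.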
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.875 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.875 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.875 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.134 00:21:55.134 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.134 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.134 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.394 { 00:21:55.394 "cntlid": 139, 00:21:55.394 "qid": 0, 00:21:55.394 "state": "enabled", 00:21:55.394 "thread": "nvmf_tgt_poll_group_000", 00:21:55.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:55.394 "listen_address": { 00:21:55.394 "trtype": "TCP", 00:21:55.394 "adrfam": "IPv4", 00:21:55.394 "traddr": "10.0.0.2", 00:21:55.394 "trsvcid": "4420" 00:21:55.394 }, 00:21:55.394 "peer_address": { 00:21:55.394 "trtype": "TCP", 00:21:55.394 "adrfam": "IPv4", 00:21:55.394 "traddr": "10.0.0.1", 00:21:55.394 "trsvcid": "41160" 00:21:55.394 }, 00:21:55.394 "auth": { 00:21:55.394 "state": "completed", 00:21:55.394 "digest": "sha512", 00:21:55.394 "dhgroup": "ffdhe8192" 00:21:55.394 } 00:21:55.394 } 00:21:55.394 ]' 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:55.394 16:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.394 16:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.394 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.394 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.654 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:55.654 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: --dhchap-ctrl-secret DHHC-1:02:ZTNjNTA5NGM2MmEwNThhMGU4YTgwNGY5YzczNmEzY2M2ZWNjODNlYzRkMmE3NTk5fKqQ/A==: 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.225 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.487 16:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.487 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.488 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.488 16:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.792 00:21:56.792 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.792 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.792 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.102 { 00:21:57.102 "cntlid": 141, 00:21:57.102 "qid": 0, 00:21:57.102 "state": "enabled", 00:21:57.102 "thread": "nvmf_tgt_poll_group_000", 00:21:57.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:57.102 "listen_address": { 00:21:57.102 "trtype": "TCP", 00:21:57.102 "adrfam": "IPv4", 00:21:57.102 "traddr": "10.0.0.2", 00:21:57.102 "trsvcid": "4420" 00:21:57.102 }, 00:21:57.102 "peer_address": { 00:21:57.102 "trtype": "TCP", 00:21:57.102 "adrfam": "IPv4", 00:21:57.102 "traddr": "10.0.0.1", 00:21:57.102 "trsvcid": "41180" 00:21:57.102 }, 00:21:57.102 "auth": { 00:21:57.102 "state": "completed", 00:21:57.102 "digest": "sha512", 00:21:57.102 "dhgroup": "ffdhe8192" 00:21:57.102 } 00:21:57.102 } 00:21:57.102 ]' 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.102 16:48:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.102 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.376 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:57.376 16:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:01:ZWM4OGUxMjk5NDhhMDA3MmY3ZjRmNDNjZGJmZTZkY2a32Aie: 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.945 16:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.945 16:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.514 00:21:58.514 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.514 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.514 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.774 { 00:21:58.774 "cntlid": 143, 00:21:58.774 "qid": 0, 00:21:58.774 "state": "enabled", 00:21:58.774 "thread": "nvmf_tgt_poll_group_000", 00:21:58.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:21:58.774 "listen_address": { 00:21:58.774 "trtype": "TCP", 00:21:58.774 "adrfam": "IPv4", 00:21:58.774 "traddr": "10.0.0.2", 00:21:58.774 "trsvcid": "4420" 00:21:58.774 }, 00:21:58.774 "peer_address": { 00:21:58.774 "trtype": "TCP", 00:21:58.774 "adrfam": "IPv4", 00:21:58.774 "traddr": "10.0.0.1", 00:21:58.774 "trsvcid": "46958" 00:21:58.774 }, 00:21:58.774 "auth": { 00:21:58.774 "state": "completed", 00:21:58.774 "digest": "sha512", 00:21:58.774 "dhgroup": "ffdhe8192" 00:21:58.774 } 00:21:58.774 } 00:21:58.774 ]' 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.774 
16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.774 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.033 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:59.033 16:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.602 16:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.602 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.861 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.861 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.861 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.861 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.122 00:22:00.122 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.122 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.122 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.381 { 00:22:00.381 "cntlid": 145, 00:22:00.381 "qid": 0, 00:22:00.381 "state": "enabled", 00:22:00.381 "thread": "nvmf_tgt_poll_group_000", 00:22:00.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:00.381 "listen_address": { 00:22:00.381 "trtype": "TCP", 00:22:00.381 "adrfam": "IPv4", 00:22:00.381 "traddr": "10.0.0.2", 00:22:00.381 "trsvcid": "4420" 00:22:00.381 }, 00:22:00.381 "peer_address": { 00:22:00.381 
"trtype": "TCP", 00:22:00.381 "adrfam": "IPv4", 00:22:00.381 "traddr": "10.0.0.1", 00:22:00.381 "trsvcid": "46982" 00:22:00.381 }, 00:22:00.381 "auth": { 00:22:00.381 "state": "completed", 00:22:00.381 "digest": "sha512", 00:22:00.381 "dhgroup": "ffdhe8192" 00:22:00.381 } 00:22:00.381 } 00:22:00.381 ]' 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.381 16:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.381 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.381 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.381 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.640 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:22:00.640 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:ODY1MjdiZWQyOTczYzcxMTI2NWVhZjE2NzczOWRhZGFmYWUyNTM2ZjhjMDFjYWNmbRDzpg==: --dhchap-ctrl-secret DHHC-1:03:Mjk0MGU3Njk0OGE0ZjQzOTY4YTRhOTU0NDBhMWM2NWYxMmJmYWMyYWQ0NWUxMDdhMzljOWQ4ODdmZDFjMDZkY3dwmw0=: 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:01.209 16:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:01.778 request: 00:22:01.778 { 00:22:01.778 "name": "nvme0", 00:22:01.778 "trtype": "tcp", 00:22:01.778 "traddr": "10.0.0.2", 00:22:01.778 "adrfam": "ipv4", 00:22:01.778 "trsvcid": "4420", 00:22:01.778 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:01.778 "prchk_reftag": false, 00:22:01.778 "prchk_guard": false, 00:22:01.778 "hdgst": false, 00:22:01.778 "ddgst": false, 00:22:01.778 "dhchap_key": "key2", 00:22:01.778 "allow_unrecognized_csi": false, 00:22:01.778 "method": "bdev_nvme_attach_controller", 00:22:01.778 "req_id": 1 00:22:01.778 } 00:22:01.778 Got JSON-RPC error response 00:22:01.778 response: 00:22:01.778 { 00:22:01.778 "code": -5, 00:22:01.778 "message": "Input/output error" 00:22:01.778 } 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.778 16:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.778 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:02.037 request: 00:22:02.037 { 00:22:02.037 "name": "nvme0", 00:22:02.037 "trtype": "tcp", 00:22:02.037 "traddr": "10.0.0.2", 00:22:02.037 "adrfam": "ipv4", 00:22:02.037 "trsvcid": "4420", 00:22:02.037 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:02.037 "prchk_reftag": false, 00:22:02.037 "prchk_guard": false, 00:22:02.037 "hdgst": false, 00:22:02.037 "ddgst": false, 00:22:02.037 "dhchap_key": "key1", 00:22:02.037 "dhchap_ctrlr_key": "ckey2", 00:22:02.037 "allow_unrecognized_csi": false, 00:22:02.037 "method": "bdev_nvme_attach_controller", 00:22:02.037 "req_id": 1 00:22:02.037 } 00:22:02.037 Got JSON-RPC error response 00:22:02.037 response: 00:22:02.037 { 00:22:02.037 "code": -5, 00:22:02.037 "message": "Input/output error" 00:22:02.037 } 00:22:02.037 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.037 16:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.037 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.038 16:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.606 request: 00:22:02.606 { 00:22:02.606 "name": "nvme0", 00:22:02.606 "trtype": "tcp", 00:22:02.606 "traddr": "10.0.0.2", 00:22:02.606 "adrfam": "ipv4", 00:22:02.606 "trsvcid": "4420", 00:22:02.606 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:02.606 "prchk_reftag": false, 00:22:02.606 "prchk_guard": false, 00:22:02.606 "hdgst": false, 00:22:02.606 "ddgst": false, 00:22:02.606 "dhchap_key": "key1", 00:22:02.606 "dhchap_ctrlr_key": "ckey1", 00:22:02.606 "allow_unrecognized_csi": false, 00:22:02.606 "method": "bdev_nvme_attach_controller", 00:22:02.606 "req_id": 1 00:22:02.606 } 00:22:02.606 Got JSON-RPC error response 00:22:02.606 response: 00:22:02.606 { 00:22:02.606 "code": -5, 00:22:02.606 "message": "Input/output error" 00:22:02.606 } 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2228511 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2228511 ']' 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2228511 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2228511 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2228511' 00:22:02.606 killing process with pid 2228511 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2228511 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2228511 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2254342 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2254342 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2254342 ']' 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.606 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2254342 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2254342 ']' 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
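The harness has just killed the first target (pid 2228511) and restarted nvmf_tgt with --wait-for-rpc plus the nvmf_auth trace flag, then waits on the RPC socket before reloading the DH-HMAC-CHAP key files through the keyring. A minimal bash sketch of that bring-up order, assuming the default /var/tmp/spdk.sock socket and rpc.py in scripts/ (the polling loop stands in for the harness's waitforlisten helper and is illustrative, not copied from this run):

  # Start the target paused: --wait-for-rpc brings up the RPC socket but
  # defers subsystem init until framework_start_init is requested.
  build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

  # Poll until the RPC socket answers (illustrative stand-in).
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  scripts/rpc.py framework_start_init

  # Register each key file (and its controller counterpart) in the keyring,
  # using the key names and paths seen later in this run.
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.f5v
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.olj
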
00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.865 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 null0 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.f5v 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.olj ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.olj 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.utG 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.VaI ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.VaI 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:03.125 16:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.RRD 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.TII ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TII 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Vy8 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
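Each connect_authenticate round in this trace repeats the same RPC shape: pin the host side to one digest/dhgroup, grant the key to the host NQN on the subsystem, attach a controller that authenticates with that key, then read back the qpair's negotiated auth parameters. A condensed sketch of the key3 round in flight here (commands copied from the trace itself; an outline rather than the harness's exact helpers, which also wrap target RPCs in the test netns):

  HOST='scripts/rpc.py -s /var/tmp/host.sock'   # what hostrpc expands to
  TGT='scripts/rpc.py'                          # target-side RPC socket

  # Restrict the initiator to the digest/dhgroup combination under test.
  $HOST bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # Authorize the host NQN for this key on the subsystem...
  $TGT nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      --dhchap-key key3

  # ...attach a controller that must authenticate with it...
  $HOST bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

  # ...and verify what the resulting qpair negotiated.
  $TGT nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
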
00:22:03.125 16:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.065 nvme0n1 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.066 { 00:22:04.066 "cntlid": 1, 00:22:04.066 "qid": 0, 00:22:04.066 "state": "enabled", 00:22:04.066 "thread": "nvmf_tgt_poll_group_000", 00:22:04.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:04.066 "listen_address": { 00:22:04.066 "trtype": "TCP", 00:22:04.066 "adrfam": "IPv4", 00:22:04.066 "traddr": "10.0.0.2", 00:22:04.066 "trsvcid": "4420" 00:22:04.066 }, 00:22:04.066 "peer_address": { 00:22:04.066 "trtype": "TCP", 00:22:04.066 "adrfam": "IPv4", 00:22:04.066 "traddr": "10.0.0.1", 00:22:04.066 "trsvcid": "47052" 00:22:04.066 }, 00:22:04.066 "auth": { 00:22:04.066 "state": "completed", 00:22:04.066 "digest": "sha512", 00:22:04.066 "dhgroup": "ffdhe8192" 00:22:04.066 } 00:22:04.066 } 00:22:04.066 ]' 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.066 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.325 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:22:04.325 16:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:04.894 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.153 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.412 request: 00:22:05.412 { 00:22:05.412 "name": "nvme0", 00:22:05.412 "trtype": "tcp", 00:22:05.412 "traddr": "10.0.0.2", 00:22:05.412 "adrfam": "ipv4", 00:22:05.412 "trsvcid": "4420", 00:22:05.412 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:05.412 "prchk_reftag": false, 00:22:05.412 "prchk_guard": false, 00:22:05.412 "hdgst": false, 00:22:05.412 "ddgst": false, 00:22:05.412 "dhchap_key": "key3", 00:22:05.412 "allow_unrecognized_csi": false, 00:22:05.412 "method": "bdev_nvme_attach_controller", 00:22:05.412 "req_id": 1 00:22:05.412 } 00:22:05.412 Got JSON-RPC error response 00:22:05.412 response: 00:22:05.412 { 00:22:05.412 "code": -5, 00:22:05.412 "message": "Input/output error" 00:22:05.412 } 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.412 16:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.412 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.671 request: 00:22:05.671 { 00:22:05.671 "name": "nvme0", 00:22:05.671 "trtype": "tcp", 00:22:05.671 "traddr": "10.0.0.2", 00:22:05.672 "adrfam": "ipv4", 00:22:05.672 "trsvcid": "4420", 00:22:05.672 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:05.672 "prchk_reftag": false, 00:22:05.672 "prchk_guard": false, 00:22:05.672 "hdgst": false, 00:22:05.672 "ddgst": false, 00:22:05.672 "dhchap_key": "key3", 00:22:05.672 "allow_unrecognized_csi": false, 00:22:05.672 "method": "bdev_nvme_attach_controller", 00:22:05.672 "req_id": 1 00:22:05.672 } 00:22:05.672 Got JSON-RPC error response 00:22:05.672 response: 00:22:05.672 { 00:22:05.672 "code": -5, 00:22:05.672 "message": "Input/output error" 00:22:05.672 } 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.672 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.932 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.193 request: 00:22:06.193 { 00:22:06.193 "name": "nvme0", 00:22:06.193 "trtype": "tcp", 00:22:06.193 "traddr": "10.0.0.2", 00:22:06.193 "adrfam": "ipv4", 00:22:06.193 "trsvcid": "4420", 00:22:06.193 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:06.193 "prchk_reftag": false, 00:22:06.193 "prchk_guard": false, 00:22:06.193 "hdgst": false, 00:22:06.193 "ddgst": false, 00:22:06.193 "dhchap_key": "key0", 00:22:06.193 "dhchap_ctrlr_key": "key1", 00:22:06.193 "allow_unrecognized_csi": false, 00:22:06.193 "method": "bdev_nvme_attach_controller", 00:22:06.193 "req_id": 1 00:22:06.193 } 00:22:06.193 Got JSON-RPC error response 00:22:06.193 response: 00:22:06.193 { 00:22:06.193 "code": -5, 00:22:06.193 "message": "Input/output error" 00:22:06.193 } 00:22:06.193 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.193 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.193 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.193 16:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.193 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:06.193 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.193 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.453 nvme0n1 00:22:06.453 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:06.453 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:06.453 16:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.453 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.453 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.453 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.711 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:22:06.711 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.711 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.711 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.711 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:06.711 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:06.711 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:07.649 nvme0n1 00:22:07.649 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:07.649 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:22:07.649 16:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.649 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.907 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:22:07.907 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: --dhchap-ctrl-secret DHHC-1:03:NzE1ZDI3MWI2MWY1OTc1ODI1MjI0YjBjNDFlZmY3MzNkNmU3MTQwMDgzY2ZjMDA3ODQwODhjNTY5MTQyMmFkMnT75Nk=: 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.474 16:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.474 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:22:08.474 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:08.474 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:08.474 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:08.474 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.475 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:08.475 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.475 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:08.475 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.475 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:09.041 request: 00:22:09.041 { 00:22:09.041 "name": "nvme0", 00:22:09.041 "trtype": "tcp", 00:22:09.041 "traddr": "10.0.0.2", 00:22:09.041 "adrfam": "ipv4", 00:22:09.041 "trsvcid": "4420", 00:22:09.041 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:22:09.041 "prchk_reftag": false, 00:22:09.041 "prchk_guard": false, 00:22:09.041 "hdgst": false, 00:22:09.041 "ddgst": false, 00:22:09.041 "dhchap_key": "key1", 00:22:09.041 "allow_unrecognized_csi": false, 00:22:09.041 "method": "bdev_nvme_attach_controller", 00:22:09.041 "req_id": 1 00:22:09.041 } 00:22:09.041 Got JSON-RPC error response 00:22:09.041 response: 00:22:09.041 { 00:22:09.041 "code": -5, 00:22:09.041 "message": "Input/output error" 00:22:09.041 } 00:22:09.041 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:09.041 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.041 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.041 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.041 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.041 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.041 16:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.607 nvme0n1 00:22:09.607 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:09.607 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:09.607 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.865 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.865 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.865 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.123 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:10.123 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.123 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.123 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.123 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:10.123 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.123 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.123 nvme0n1 00:22:10.382 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:10.382 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:10.382 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.382 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.382 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.382 16:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: '' 2s 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: ]] 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGNkZGI3MmExOTI5ZDgyMTYyOWIxYThmYWMxMDgzNzYax0PI: 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:10.642 16:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:12.549 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:12.549 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:12.549 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:12.549 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:12.549 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:12.549 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:12.549 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: 2s 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: ]] 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWZjZGQyMDZlZjBlYjI2ODgxMzJjYWFjMTA0NGJjZTYwZWU4M2I4ZjgwN2I4YmRiN2XjcA==: 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:12.550 16:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.084 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.344 nvme0n1 00:22:15.344 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.344 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.344 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.344 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.344 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.344 16:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:15.914 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:16.173 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:16.174 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.174 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@258 -- # jq -r '.[].name' 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.433 16:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.692 request: 00:22:16.692 { 00:22:16.692 "name": "nvme0", 00:22:16.692 "dhchap_key": "key1", 00:22:16.692 "dhchap_ctrlr_key": "key3", 00:22:16.692 "method": "bdev_nvme_set_keys", 00:22:16.692 "req_id": 1 00:22:16.692 } 00:22:16.692 Got JSON-RPC error response 00:22:16.692 response: 00:22:16.692 { 00:22:16.692 "code": -13, 00:22:16.692 "message": "Permission denied" 00:22:16.692 } 00:22:16.692 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:16.692 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:16.692 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:16.692 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:16.692 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:16.692 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.692 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:16.951 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:22:16.951 16:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:17.888 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:17.888 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:17.888 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.147 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:18.148 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.148 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.148 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.148 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.148 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.148 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.148 16:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.715 nvme0n1 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
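
The records on either side of this point exercise SPDK's DH-HMAC-CHAP re-key path: nvmf_subsystem_set_keys installs the new key pair on the target first, bdev_nvme_set_keys then re-authenticates the live host controller, and a host request naming a key the target does not hold is expected to be rejected with JSON-RPC error -13 ("Permission denied"), as the next records show. A condensed sketch of that sequence, reusing the socket path, NQNs, and key names from this run (the target-side call is assumed to go to rpc.py's default socket; the host-side calls use the /var/tmp/host.sock seen throughout this log):

# Target side first: install the new key pair for this host.
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side second: move the existing controller to the new keys.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Negative case: a ctrlr key the target no longer holds must be refused
# with code -13, which is what the request/response dump below records.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key0 && exit 1
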
00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:18.715 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.284 request: 00:22:19.284 { 00:22:19.284 "name": "nvme0", 00:22:19.284 "dhchap_key": "key2", 00:22:19.284 "dhchap_ctrlr_key": "key0", 00:22:19.284 "method": "bdev_nvme_set_keys", 00:22:19.284 "req_id": 1 00:22:19.284 } 00:22:19.284 Got JSON-RPC error response 00:22:19.284 response: 00:22:19.284 { 00:22:19.284 "code": -13, 00:22:19.284 "message": "Permission denied" 00:22:19.284 } 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:19.284 16:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:20.663 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:20.663 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:20.663 16:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2228538 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2228538 ']' 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2228538 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:20.663 
16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2228538 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2228538' 00:22:20.663 killing process with pid 2228538 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2228538 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2228538 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.663 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.922 rmmod nvme_tcp 00:22:20.922 rmmod nvme_fabrics 00:22:20.922 rmmod nvme_keyring 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2254342 ']' 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2254342 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2254342 ']' 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2254342 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2254342 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2254342' 00:22:20.922 killing process with pid 2254342 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2254342 00:22:20.922 16:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2254342 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.922 16:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.f5v /tmp/spdk.key-sha256.utG /tmp/spdk.key-sha384.RRD /tmp/spdk.key-sha512.Vy8 /tmp/spdk.key-sha512.olj /tmp/spdk.key-sha384.VaI /tmp/spdk.key-sha256.TII '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:23.463 00:22:23.463 real 2m17.629s 00:22:23.463 user 5m9.135s 00:22:23.463 sys 0m19.782s 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.463 ************************************ 00:22:23.463 END TEST nvmf_auth_target 00:22:23.463 ************************************ 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.463 ************************************ 00:22:23.463 START TEST nvmf_bdevio_no_huge 00:22:23.463 ************************************ 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:23.463 * Looking for test storage... 
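
With the auth-target summary above, the bdevio run begins by sourcing test/nvmf/common.sh, and the xtrace that follows steps through the lt/cmp_versions helpers from scripts/common.sh to decide whether the installed lcov predates 2.x before choosing coverage flags. A reduced paraphrase of the comparison being traced (helper names follow the trace; the body is a readability sketch, not the verbatim script):

# lt A B: succeed when version A sorts strictly before version B.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                 # split fields on dots, dashes, colons
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # '<' is already false
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # '<' is already true
    done
    return 1                      # equal versions: strictly-less is false
}

lt 1.15 2 && echo "lcov is pre-2.x"   # succeeds, matching the trace below
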
00:22:23.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:23.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.463 --rc genhtml_branch_coverage=1 00:22:23.463 --rc genhtml_function_coverage=1 00:22:23.463 --rc genhtml_legend=1 00:22:23.463 --rc geninfo_all_blocks=1 00:22:23.463 --rc geninfo_unexecuted_blocks=1 00:22:23.463 00:22:23.463 ' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:23.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.463 --rc genhtml_branch_coverage=1 00:22:23.463 --rc genhtml_function_coverage=1 00:22:23.463 --rc genhtml_legend=1 00:22:23.463 --rc geninfo_all_blocks=1 00:22:23.463 --rc geninfo_unexecuted_blocks=1 00:22:23.463 00:22:23.463 ' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:23.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.463 --rc genhtml_branch_coverage=1 00:22:23.463 --rc genhtml_function_coverage=1 00:22:23.463 --rc genhtml_legend=1 00:22:23.463 --rc geninfo_all_blocks=1 00:22:23.463 --rc geninfo_unexecuted_blocks=1 00:22:23.463 00:22:23.463 ' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:23.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.463 --rc genhtml_branch_coverage=1 00:22:23.463 --rc genhtml_function_coverage=1 00:22:23.463 --rc genhtml_legend=1 00:22:23.463 --rc geninfo_all_blocks=1 00:22:23.463 --rc geninfo_unexecuted_blocks=1 00:22:23.463 00:22:23.463 ' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.463 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:22:23.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.464 16:49:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.763 
16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:28.763 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:28.763 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:28.763 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:28.764 Found net devices under 0000:31:00.0: cvl_0_0 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:28.764 Found net devices under 0000:31:00.1: cvl_0_1 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.764 16:49:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:22:28.764 00:22:28.764 --- 10.0.0.2 ping statistics --- 00:22:28.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.764 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:22:28.764 00:22:28.764 --- 10.0.0.1 ping statistics --- 00:22:28.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.764 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2262805 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2262805 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2262805 ']' 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:28.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.764 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:28.764 [2024-12-06 16:49:17.182609] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:22:28.764 [2024-12-06 16:49:17.182663] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:28.764 [2024-12-06 16:49:17.270244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.764 [2024-12-06 16:49:17.307337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.764 [2024-12-06 16:49:17.307365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.764 [2024-12-06 16:49:17.307373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.764 [2024-12-06 16:49:17.307382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.764 [2024-12-06 16:49:17.307388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
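[editor's note] Condensed from the nvmf_tcp_init and nvmfappstart xtrace above, the effective setup for this run is the short script below. This is a minimal sketch, not the harness itself: the cvl_0_* port names, the cvl_0_0_ns_spdk namespace, and the nvmf_tgt build path are specific to this host and should be read as placeholders anywhere else, and the SPDK_NVMF comment tag on the iptables rule is dropped for brevity. The -m 0x78 core mask selects cores 3-6, matching the four reactors started just below.

    #!/usr/bin/env bash
    # Sketch of the traced setup: move one port of the NIC pair into a
    # network namespace so target and initiator talk over real TCP.
    NETNS=cvl_0_0_ns_spdk

    ip netns add "$NETNS"
    ip link set cvl_0_0 netns "$NETNS"              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
    ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec "$NETNS" ip link set cvl_0_0 up
    ip netns exec "$NETNS" ip link set lo up

    # Open the NVMe/TCP listener port on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Reachability check in both directions, as in the ping output above.
    ping -c 1 10.0.0.2
    ip netns exec "$NETNS" ping -c 1 10.0.0.1

    # Start the target inside the namespace without hugepages; -s 1024
    # caps it at 1 GiB of regular memory, -m 0x78 pins it to cores 3-6.
    ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        --no-huge -s 1024 -m 0x78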
00:22:28.764 [2024-12-06 16:49:17.308721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:28.764 [2024-12-06 16:49:17.308868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:28.764 [2024-12-06 16:49:17.308980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:28.764 [2024-12-06 16:49:17.308978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.333 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:29.333 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:29.333 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:29.333 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.333 16:49:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.333 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.333 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.333 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.334 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.334 [2024-12-06 16:49:18.008859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.334 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.334 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:29.334 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.334 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.594 Malloc0 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.594 [2024-12-06 16:49:18.046970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:29.594 { 00:22:29.594 "params": { 00:22:29.594 "name": "Nvme$subsystem", 00:22:29.594 "trtype": "$TEST_TRANSPORT", 00:22:29.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:29.594 "adrfam": "ipv4", 00:22:29.594 "trsvcid": "$NVMF_PORT", 00:22:29.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:29.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:29.594 "hdgst": ${hdgst:-false}, 00:22:29.594 "ddgst": ${ddgst:-false} 00:22:29.594 }, 00:22:29.594 "method": "bdev_nvme_attach_controller" 00:22:29.594 } 00:22:29.594 EOF 00:22:29.594 )") 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:29.594 16:49:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:29.594 "params": { 00:22:29.594 "name": "Nvme1", 00:22:29.594 "trtype": "tcp", 00:22:29.594 "traddr": "10.0.0.2", 00:22:29.594 "adrfam": "ipv4", 00:22:29.594 "trsvcid": "4420", 00:22:29.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.594 "hdgst": false, 00:22:29.594 "ddgst": false 00:22:29.594 }, 00:22:29.594 "method": "bdev_nvme_attach_controller" 00:22:29.594 }' 00:22:29.594 [2024-12-06 16:49:18.086485] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:22:29.594 [2024-12-06 16:49:18.086556] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2262976 ] 00:22:29.594 [2024-12-06 16:49:18.171720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:29.594 [2024-12-06 16:49:18.218739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.594 [2024-12-06 16:49:18.218908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.594 [2024-12-06 16:49:18.218910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.853 I/O targets: 00:22:29.854 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:29.854 00:22:29.854 00:22:29.854 CUnit - A unit testing framework for C - Version 2.1-3 00:22:29.854 http://cunit.sourceforge.net/ 00:22:29.854 00:22:29.854 00:22:29.854 Suite: bdevio tests on: Nvme1n1 00:22:30.114 Test: blockdev write read block ...passed 00:22:30.114 Test: blockdev write zeroes read block ...passed 00:22:30.114 Test: blockdev write zeroes read no split ...passed 00:22:30.114 Test: blockdev write zeroes read split ...passed 00:22:30.114 Test: blockdev write zeroes read split partial ...passed 00:22:30.114 Test: blockdev reset ...[2024-12-06 16:49:18.652450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:30.114 [2024-12-06 16:49:18.652517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1985000 (9): Bad file descriptor 00:22:30.114 [2024-12-06 16:49:18.711647] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:22:30.114 passed 00:22:30.114 Test: blockdev write read 8 blocks ...passed 00:22:30.114 Test: blockdev write read size > 128k ...passed 00:22:30.114 Test: blockdev write read invalid size ...passed 00:22:30.114 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:30.114 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:30.114 Test: blockdev write read max offset ...passed 00:22:30.374 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:30.374 Test: blockdev writev readv 8 blocks ...passed 00:22:30.374 Test: blockdev writev readv 30 x 1block ...passed 00:22:30.374 Test: blockdev writev readv block ...passed 00:22:30.374 Test: blockdev writev readv size > 128k ...passed 00:22:30.374 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:30.374 Test: blockdev comparev and writev ...[2024-12-06 16:49:19.017339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.017381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:30.374 [2024-12-06 16:49:19.017398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.017407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:30.374 [2024-12-06 16:49:19.017848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.017859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:30.374 [2024-12-06 16:49:19.017873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.017882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:30.374 [2024-12-06 16:49:19.018364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.018375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:30.374 [2024-12-06 16:49:19.018389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.018397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:30.374 [2024-12-06 16:49:19.018859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.018871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:30.374 [2024-12-06 16:49:19.018885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:30.374 [2024-12-06 16:49:19.018893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:30.374 passed 00:22:30.634 Test: blockdev nvme passthru rw ...passed 00:22:30.634 Test: blockdev nvme passthru vendor specific ...[2024-12-06 16:49:19.103952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:30.634 [2024-12-06 16:49:19.103966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:30.634 [2024-12-06 16:49:19.104312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:30.634 [2024-12-06 16:49:19.104322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:30.634 [2024-12-06 16:49:19.104695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:30.634 [2024-12-06 16:49:19.104705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:30.634 [2024-12-06 16:49:19.105037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:30.634 [2024-12-06 16:49:19.105048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:30.634 passed 00:22:30.634 Test: blockdev nvme admin passthru ...passed 00:22:30.634 Test: blockdev copy ...passed 00:22:30.634 00:22:30.634 Run Summary: Type Total Ran Passed Failed Inactive 00:22:30.634 suites 1 1 n/a 0 0 00:22:30.634 tests 23 23 23 0 0 00:22:30.634 asserts 152 152 152 0 n/a 00:22:30.634 00:22:30.634 Elapsed time = 1.285 seconds 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:30.895 rmmod nvme_tcp 00:22:30.895 rmmod nvme_fabrics 00:22:30.895 rmmod nvme_keyring 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2262805 ']' 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2262805 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2262805 ']' 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2262805 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2262805 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2262805' 00:22:30.895 killing process with pid 2262805 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2262805 00:22:30.895 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2262805 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.154 16:49:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:33.707 00:22:33.707 real 0m10.155s 00:22:33.707 user 0m13.406s 00:22:33.707 sys 0m4.973s 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.707 ************************************ 00:22:33.707 END TEST nvmf_bdevio_no_huge 00:22:33.707 ************************************ 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:33.707 ************************************ 00:22:33.707 START TEST nvmf_tls 00:22:33.707 ************************************ 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:33.707 * Looking for test storage... 00:22:33.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.707 --rc genhtml_branch_coverage=1 00:22:33.707 --rc genhtml_function_coverage=1 00:22:33.707 --rc genhtml_legend=1 00:22:33.707 --rc geninfo_all_blocks=1 00:22:33.707 --rc geninfo_unexecuted_blocks=1 00:22:33.707 00:22:33.707 ' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.707 --rc genhtml_branch_coverage=1 00:22:33.707 --rc genhtml_function_coverage=1 00:22:33.707 --rc genhtml_legend=1 00:22:33.707 --rc geninfo_all_blocks=1 00:22:33.707 --rc geninfo_unexecuted_blocks=1 00:22:33.707 00:22:33.707 ' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.707 --rc genhtml_branch_coverage=1 00:22:33.707 --rc genhtml_function_coverage=1 00:22:33.707 --rc genhtml_legend=1 00:22:33.707 --rc geninfo_all_blocks=1 00:22:33.707 --rc geninfo_unexecuted_blocks=1 00:22:33.707 00:22:33.707 ' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:33.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:33.707 --rc genhtml_branch_coverage=1 00:22:33.707 --rc genhtml_function_coverage=1 00:22:33.707 --rc genhtml_legend=1 00:22:33.707 --rc geninfo_all_blocks=1 00:22:33.707 --rc geninfo_unexecuted_blocks=1 00:22:33.707 00:22:33.707 ' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.707 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:33.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:22:33.708 16:49:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.978 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.978 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:38.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:38.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:38.979 Found net devices under 0000:31:00.0: cvl_0_0 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:38.979 Found net devices under 0000:31:00.1: cvl_0_1 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:38.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:22:38.979 00:22:38.979 --- 10.0.0.2 ping statistics --- 00:22:38.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.979 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:22:38.979 00:22:38.979 --- 10.0.0.1 ping statistics --- 00:22:38.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.979 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:38.979 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2267691 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2267691 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2267691 ']' 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.980 16:49:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.980 [2024-12-06 16:49:27.522002] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
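[Editor's note] Summarizing the network plumbing the trace just performed: the two E810 ports show up as cvl_0_0 and cvl_0_1, and nvmf_tcp_init isolates the target port in its own network namespace so initiator and target can talk over real hardware on a single host. The essential commands, lifted from the trace:

# Target port lives in namespace cvl_0_0_ns_spdk as 10.0.0.2/24;
# initiator port stays in the root namespace as 10.0.0.1/24.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2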
00:22:38.980 [2024-12-06 16:49:27.522068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.980 [2024-12-06 16:49:27.616174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.980 [2024-12-06 16:49:27.642900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.980 [2024-12-06 16:49:27.642950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.980 [2024-12-06 16:49:27.642959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.980 [2024-12-06 16:49:27.642966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.980 [2024-12-06 16:49:27.642972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.980 [2024-12-06 16:49:27.643732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:39.918 true 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:39.918 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.177 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:40.177 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:40.177 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:40.177 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.177 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:40.436 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:40.436 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:40.436 16:49:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:40.695 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.695 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:40.695 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:40.695 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:40.695 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.695 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:40.956 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:40.956 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:40.956 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:40.956 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:40.956 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:41.214 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:41.214 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:41.214 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:41.473 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.473 16:49:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:41.473 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:41.473 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.DtTbt8SnhP 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.pXIjZfiuNP 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.DtTbt8SnhP 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.pXIjZfiuNP 00:22:41.474 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:41.733 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:41.991 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.DtTbt8SnhP 00:22:41.991 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DtTbt8SnhP 00:22:41.991 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:41.991 [2024-12-06 16:49:30.652503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.991 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:42.250 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:42.508 [2024-12-06 16:49:30.973268] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:42.508 [2024-12-06 16:49:30.973482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.508 16:49:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:42.508 malloc0 00:22:42.508 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:42.767 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DtTbt8SnhP 00:22:43.026 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.026 16:49:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.DtTbt8SnhP 00:22:53.128 Initializing NVMe Controllers 00:22:53.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:53.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:53.128 Initialization complete. Launching workers. 00:22:53.128 ======================================================== 00:22:53.128 Latency(us) 00:22:53.128 Device Information : IOPS MiB/s Average min max 00:22:53.128 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18903.06 73.84 3385.91 1004.12 3990.57 00:22:53.128 ======================================================== 00:22:53.128 Total : 18903.06 73.84 3385.91 1004.12 3990.57 00:22:53.128 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DtTbt8SnhP 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DtTbt8SnhP 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2270893 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2270893 /var/tmp/bdevperf.sock 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2270893 ']' 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
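[Editor's note] The two secrets generated earlier by format_interchange_psk are in the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a hash identifier printed as 01, then base64 of the key material with its CRC32 appended, and a closing colon. A hedged reconstruction of the first key, mirroring the python snippet the trace runs at nvmf/common.sh line 733 (as far as the output allows one to infer, the key string is carried as ASCII text and the CRC is appended as 4 little-endian bytes):

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"             # carried as text, not hex bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte little-endian CRC32
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
EOF
# Should reproduce the NVMeTLSkey-1:01:MDAx...JEiQ: value seen above.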
00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.128 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.128 [2024-12-06 16:49:41.755977] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:22:53.128 [2024-12-06 16:49:41.756032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270893 ] 00:22:53.388 [2024-12-06 16:49:41.833555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.388 [2024-12-06 16:49:41.851332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.388 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.388 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:53.389 16:49:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DtTbt8SnhP 00:22:53.389 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.648 [2024-12-06 16:49:42.220304] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.648 TLSTESTn1 00:22:53.648 16:49:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.907 Running I/O for 10 seconds... 
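[Editor's note] Pulling the initiator-side steps of the trace together: bdevperf is started with -z (wait for RPC) on its own socket, the PSK file is registered as key0 on that instance, the TLS connection is made by key name, and only then is I/O driven. A condensed sketch using the same arguments (rpc.py and bdevperf.py stand in for the full workspace paths above):

bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DtTbt8SnhP
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests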
00:22:55.784 4256.00 IOPS, 16.62 MiB/s [2024-12-06T15:49:45.414Z] 3878.00 IOPS, 15.15 MiB/s [2024-12-06T15:49:46.790Z] 3784.33 IOPS, 14.78 MiB/s [2024-12-06T15:49:47.729Z] 3862.25 IOPS, 15.09 MiB/s [2024-12-06T15:49:48.667Z] 4019.00 IOPS, 15.70 MiB/s [2024-12-06T15:49:49.608Z] 3920.17 IOPS, 15.31 MiB/s [2024-12-06T15:49:50.548Z] 3927.29 IOPS, 15.34 MiB/s [2024-12-06T15:49:51.486Z] 4007.88 IOPS, 15.66 MiB/s [2024-12-06T15:49:52.424Z] 4143.89 IOPS, 16.19 MiB/s [2024-12-06T15:49:52.684Z] 4138.90 IOPS, 16.17 MiB/s 00:23:03.991 Latency(us) 00:23:03.991 [2024-12-06T15:49:52.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.991 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.991 Verification LBA range: start 0x0 length 0x2000 00:23:03.991 TLSTESTn1 : 10.06 4128.20 16.13 0.00 0.00 30901.70 6171.31 89565.87 00:23:03.991 [2024-12-06T15:49:52.684Z] =================================================================================================================== 00:23:03.991 [2024-12-06T15:49:52.684Z] Total : 4128.20 16.13 0.00 0.00 30901.70 6171.31 89565.87 00:23:03.991 { 00:23:03.991 "results": [ 00:23:03.991 { 00:23:03.991 "job": "TLSTESTn1", 00:23:03.991 "core_mask": "0x4", 00:23:03.991 "workload": "verify", 00:23:03.991 "status": "finished", 00:23:03.991 "verify_range": { 00:23:03.991 "start": 0, 00:23:03.991 "length": 8192 00:23:03.991 }, 00:23:03.991 "queue_depth": 128, 00:23:03.991 "io_size": 4096, 00:23:03.991 "runtime": 10.056679, 00:23:03.991 "iops": 4128.201765214938, 00:23:03.991 "mibps": 16.12578814537085, 00:23:03.991 "io_failed": 0, 00:23:03.991 "io_timeout": 0, 00:23:03.991 "avg_latency_us": 30901.695623855867, 00:23:03.991 "min_latency_us": 6171.306666666666, 00:23:03.991 "max_latency_us": 89565.86666666667 00:23:03.991 } 00:23:03.991 ], 00:23:03.991 "core_count": 1 00:23:03.991 } 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2270893 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2270893 ']' 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2270893 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2270893 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2270893' 00:23:03.991 killing process with pid 2270893 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2270893 00:23:03.991 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.991 00:23:03.991 Latency(us) 00:23:03.991 [2024-12-06T15:49:52.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.991 [2024-12-06T15:49:52.684Z] 
=================================================================================================================== 00:23:03.991 [2024-12-06T15:49:52.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2270893 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pXIjZfiuNP 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pXIjZfiuNP 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pXIjZfiuNP 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pXIjZfiuNP 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2273217 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2273217 /var/tmp/bdevperf.sock 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2273217 ']' 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
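[Editor's note] The target/tls.sh@147 case that just launched is a deliberate failure test: the target still trusts only the first key (/tmp/tmp.DtTbt8SnhP, registered as key0 for host1), while this bdevperf instance registers the second key, /tmp/tmp.pXIjZfiuNP, so the attach must fail. NOT is the harness wrapper that inverts exit status; a simplified stand-in is sketched below (the real helper in common/autotest_common.sh additionally treats exit codes above 128, i.e. deaths by signal, as genuine failures, which the es checks in the trace reflect):

# Hypothetical simplified equivalent of the harness helper, for orientation only:
NOT() {
    if "$@"; then return 1; else return 0; fi
}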
00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.991 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.991 [2024-12-06 16:49:52.657026] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:03.991 [2024-12-06 16:49:52.657081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273217 ] 00:23:04.250 [2024-12-06 16:49:52.720948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.250 [2024-12-06 16:49:52.735887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.250 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.250 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.250 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pXIjZfiuNP 00:23:04.510 16:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.510 [2024-12-06 16:49:53.101030] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.510 [2024-12-06 16:49:53.111223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:04.510 [2024-12-06 16:49:53.112221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e72f0 (107): Transport endpoint is not connected 00:23:04.510 [2024-12-06 16:49:53.113217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e72f0 (9): Bad file descriptor 00:23:04.510 [2024-12-06 16:49:53.114219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:04.510 [2024-12-06 16:49:53.114226] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:04.510 [2024-12-06 16:49:53.114232] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:04.510 [2024-12-06 16:49:53.114240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
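[Editor's note] The errors above are the expected signature of a PSK mismatch: the target rejects the TLS handshake and closes the socket, the initiator's next read reports errno 107 (Transport endpoint is not connected) and then a bad file descriptor, and bdev_nvme_attach_controller surfaces this as the -5 Input/output error dumped next. A one-line sanity check (sketch) that the client really presented a different key than the target holds:

cmp -s /tmp/tmp.DtTbt8SnhP /tmp/tmp.pXIjZfiuNP || echo "keys differ, so the failure is expected"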
00:23:04.510 request: 00:23:04.510 { 00:23:04.510 "name": "TLSTEST", 00:23:04.510 "trtype": "tcp", 00:23:04.510 "traddr": "10.0.0.2", 00:23:04.510 "adrfam": "ipv4", 00:23:04.510 "trsvcid": "4420", 00:23:04.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:04.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:04.510 "prchk_reftag": false, 00:23:04.510 "prchk_guard": false, 00:23:04.510 "hdgst": false, 00:23:04.510 "ddgst": false, 00:23:04.510 "psk": "key0", 00:23:04.510 "allow_unrecognized_csi": false, 00:23:04.510 "method": "bdev_nvme_attach_controller", 00:23:04.510 "req_id": 1 00:23:04.510 } 00:23:04.510 Got JSON-RPC error response 00:23:04.510 response: 00:23:04.510 { 00:23:04.510 "code": -5, 00:23:04.510 "message": "Input/output error" 00:23:04.510 } 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2273217 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2273217 ']' 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2273217 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273217 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:04.510 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273217' 00:23:04.510 killing process with pid 2273217 00:23:04.511 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2273217 00:23:04.511 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.511 00:23:04.511 Latency(us) 00:23:04.511 [2024-12-06T15:49:53.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.511 [2024-12-06T15:49:53.204Z] =================================================================================================================== 00:23:04.511 [2024-12-06T15:49:53.204Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:04.511 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2273217 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DtTbt8SnhP 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.DtTbt8SnhP 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DtTbt8SnhP 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DtTbt8SnhP 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2273244 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2273244 /var/tmp/bdevperf.sock 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2273244 ']' 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.771 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.772 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.772 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.772 [2024-12-06 16:49:53.301837] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:04.772 [2024-12-06 16:49:53.301892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273244 ] 00:23:04.772 [2024-12-06 16:49:53.365728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.772 [2024-12-06 16:49:53.380147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.772 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.772 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.772 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DtTbt8SnhP 00:23:05.032 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:05.291 [2024-12-06 16:49:53.745054] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.291 [2024-12-06 16:49:53.752686] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:05.291 [2024-12-06 16:49:53.752707] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:05.291 [2024-12-06 16:49:53.752726] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:05.291 [2024-12-06 16:49:53.753228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152f2f0 (107): Transport endpoint is not connected 00:23:05.291 [2024-12-06 16:49:53.754223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152f2f0 (9): Bad file descriptor 00:23:05.291 [2024-12-06 16:49:53.755225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:05.291 [2024-12-06 16:49:53.755233] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:05.291 [2024-12-06 16:49:53.755238] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:05.291 [2024-12-06 16:49:53.755246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
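[Editor's note] This second negative case fails one step earlier than the first: the key itself is valid, but the target resolves PSKs by the identity string logged above, NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1, which binds host NQN and subsystem NQN together, and only host1 was ever granted a PSK on cnode1. Illustration only (not executed in this run), admitting host2 would look like:

rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host2 --psk key0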
00:23:05.291 request: 00:23:05.291 { 00:23:05.291 "name": "TLSTEST", 00:23:05.291 "trtype": "tcp", 00:23:05.291 "traddr": "10.0.0.2", 00:23:05.291 "adrfam": "ipv4", 00:23:05.291 "trsvcid": "4420", 00:23:05.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:05.292 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:05.292 "prchk_reftag": false, 00:23:05.292 "prchk_guard": false, 00:23:05.292 "hdgst": false, 00:23:05.292 "ddgst": false, 00:23:05.292 "psk": "key0", 00:23:05.292 "allow_unrecognized_csi": false, 00:23:05.292 "method": "bdev_nvme_attach_controller", 00:23:05.292 "req_id": 1 00:23:05.292 } 00:23:05.292 Got JSON-RPC error response 00:23:05.292 response: 00:23:05.292 { 00:23:05.292 "code": -5, 00:23:05.292 "message": "Input/output error" 00:23:05.292 } 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2273244 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2273244 ']' 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2273244 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273244 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273244' 00:23:05.292 killing process with pid 2273244 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2273244 00:23:05.292 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.292 00:23:05.292 Latency(us) 00:23:05.292 [2024-12-06T15:49:53.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.292 [2024-12-06T15:49:53.985Z] =================================================================================================================== 00:23:05.292 [2024-12-06T15:49:53.985Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2273244 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DtTbt8SnhP 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.DtTbt8SnhP 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DtTbt8SnhP 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DtTbt8SnhP 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2273407 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2273407 /var/tmp/bdevperf.sock 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2273407 ']' 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.292 16:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:05.292 [2024-12-06 16:49:53.944060] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:05.292 [2024-12-06 16:49:53.944121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273407 ] 00:23:05.552 [2024-12-06 16:49:54.009196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.552 [2024-12-06 16:49:54.024593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.552 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:05.552 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:05.552 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DtTbt8SnhP 00:23:05.812 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:05.812 [2024-12-06 16:49:54.389497] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.812 [2024-12-06 16:49:54.394220] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:05.812 [2024-12-06 16:49:54.394238] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:05.812 [2024-12-06 16:49:54.394257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:05.812 [2024-12-06 16:49:54.394735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2c2f0 (107): Transport endpoint is not connected 00:23:05.812 [2024-12-06 16:49:54.395730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2c2f0 (9): Bad file descriptor 00:23:05.812 [2024-12-06 16:49:54.396731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:05.812 [2024-12-06 16:49:54.396738] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:05.812 [2024-12-06 16:49:54.396744] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:05.812 [2024-12-06 16:49:54.396752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
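The tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors above name the PSK identity that NVMe/TCP derives for the TLS handshake: a fixed NVMe0R01 tag (R marking a retained PSK, 01 the hash identifier) followed by the host NQN and the subsystem NQN. In this negative test the target has no PSK registered under that host/subsystem pairing, so the server-side lookup comes up empty and the attach decays into the I/O error dumped below. A minimal sketch of the identity, with values copied from the error line (the authoritative derivation lives in SPDK's TCP transport, not this snippet):

    # Sketch only: the identity string the target searches its PSK store for
    # during the TLS handshake ("NVMe0R01 <hostnqn> <subnqn>").
    hostnqn=nqn.2016-06.io.spdk:host1
    subnqn=nqn.2016-06.io.spdk:cnode2
    echo "NVMe0R01 ${hostnqn} ${subnqn}"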
00:23:05.812 request: 00:23:05.812 { 00:23:05.812 "name": "TLSTEST", 00:23:05.812 "trtype": "tcp", 00:23:05.812 "traddr": "10.0.0.2", 00:23:05.812 "adrfam": "ipv4", 00:23:05.812 "trsvcid": "4420", 00:23:05.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:05.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:05.813 "prchk_reftag": false, 00:23:05.813 "prchk_guard": false, 00:23:05.813 "hdgst": false, 00:23:05.813 "ddgst": false, 00:23:05.813 "psk": "key0", 00:23:05.813 "allow_unrecognized_csi": false, 00:23:05.813 "method": "bdev_nvme_attach_controller", 00:23:05.813 "req_id": 1 00:23:05.813 } 00:23:05.813 Got JSON-RPC error response 00:23:05.813 response: 00:23:05.813 { 00:23:05.813 "code": -5, 00:23:05.813 "message": "Input/output error" 00:23:05.813 } 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2273407 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2273407 ']' 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2273407 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273407 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273407' 00:23:05.813 killing process with pid 2273407 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2273407 00:23:05.813 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.813 00:23:05.813 Latency(us) 00:23:05.813 [2024-12-06T15:49:54.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.813 [2024-12-06T15:49:54.506Z] =================================================================================================================== 00:23:05.813 [2024-12-06T15:49:54.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:05.813 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2273407 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:06.074 
16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2273592 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2273592 /var/tmp/bdevperf.sock 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2273592 ']' 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.074 [2024-12-06 16:49:54.580803] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:06.074 [2024-12-06 16:49:54.580857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273592 ] 00:23:06.074 [2024-12-06 16:49:54.644771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.074 [2024-12-06 16:49:54.659570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.074 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:06.334 [2024-12-06 16:49:54.864039] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:06.334 [2024-12-06 16:49:54.864067] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:06.334 request: 00:23:06.334 { 00:23:06.334 "name": "key0", 00:23:06.334 "path": "", 00:23:06.334 "method": "keyring_file_add_key", 00:23:06.334 "req_id": 1 00:23:06.334 } 00:23:06.334 Got JSON-RPC error response 00:23:06.334 response: 00:23:06.334 { 00:23:06.334 "code": -1, 00:23:06.334 "message": "Operation not permitted" 00:23:06.334 } 00:23:06.334 16:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.334 [2024-12-06 16:49:55.024522] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.334 [2024-12-06 16:49:55.024546] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:06.594 request: 00:23:06.594 { 00:23:06.594 "name": "TLSTEST", 00:23:06.594 "trtype": "tcp", 00:23:06.594 "traddr": "10.0.0.2", 00:23:06.594 "adrfam": "ipv4", 00:23:06.594 "trsvcid": "4420", 00:23:06.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.594 "prchk_reftag": false, 00:23:06.594 "prchk_guard": false, 00:23:06.594 "hdgst": false, 00:23:06.594 "ddgst": false, 00:23:06.594 "psk": "key0", 00:23:06.594 "allow_unrecognized_csi": false, 00:23:06.594 "method": "bdev_nvme_attach_controller", 00:23:06.594 "req_id": 1 00:23:06.594 } 00:23:06.594 Got JSON-RPC error response 00:23:06.594 response: 00:23:06.594 { 00:23:06.594 "code": -126, 00:23:06.594 "message": "Required key not available" 00:23:06.594 } 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2273592 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2273592 ']' 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2273592 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2273592 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273592' 00:23:06.594 killing process with pid 2273592 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2273592 00:23:06.594 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.594 00:23:06.594 Latency(us) 00:23:06.594 [2024-12-06T15:49:55.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.594 [2024-12-06T15:49:55.287Z] =================================================================================================================== 00:23:06.594 [2024-12-06T15:49:55.287Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2273592 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2267691 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2267691 ']' 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2267691 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267691 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267691' 00:23:06.594 killing process with pid 2267691 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2267691 00:23:06.594 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2267691 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:06.853 16:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.G0KVrez7L7 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.G0KVrez7L7 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2273729 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2273729 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2273729 ']' 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.853 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.853 [2024-12-06 16:49:55.416230] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:06.853 [2024-12-06 16:49:55.416289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.853 [2024-12-06 16:49:55.487850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.853 [2024-12-06 16:49:55.503543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.853 [2024-12-06 16:49:55.503573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
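The format_interchange_psk call above (target/tls.sh@160) wraps the raw configured key in the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here, taken from the digest=2 argument), and a colon-terminated base64 blob. Decoding the generated key_long shows the blob to be the 48 key bytes plus four trailing bytes, consistent with an appended checksum. A hedged reconstruction of the inline python helper invoked via nvmf/common.sh@733, assuming the trailing bytes are a little-endian CRC-32 of the key (the shipped helper in nvmf/common.sh is authoritative):

    python3 - <<'EOF'
    # Sketch, not the shipped helper: rebuild the key_long string printed above.
    import base64, struct, zlib
    key = b"00112233445566778899aabbccddeeff0011223344556677"  # 48 bytes
    blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
    print(f"NVMeTLSkey-1:02:{blob}:")
    EOF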
00:23:06.853 [2024-12-06 16:49:55.503578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.853 [2024-12-06 16:49:55.503583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.853 [2024-12-06 16:49:55.503588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.853 [2024-12-06 16:49:55.504091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.G0KVrez7L7 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.G0KVrez7L7 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.111 [2024-12-06 16:49:55.738818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.111 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.368 16:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:07.368 [2024-12-06 16:49:56.047571] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.368 [2024-12-06 16:49:56.047775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.626 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:07.626 malloc0 00:23:07.626 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.885 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:07.885 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0KVrez7L7 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.G0KVrez7L7 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2273981 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2273981 /var/tmp/bdevperf.sock 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2273981 ']' 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.143 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.144 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.144 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.144 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.144 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.144 [2024-12-06 16:49:56.711597] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:08.144 [2024-12-06 16:49:56.711648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273981 ] 00:23:08.144 [2024-12-06 16:49:56.775860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.144 [2024-12-06 16:49:56.792139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.402 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.402 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.402 16:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:08.402 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.661 [2024-12-06 16:49:57.145263] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.661 TLSTESTn1 00:23:08.661 16:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:08.661 Running I/O for 10 seconds... 00:23:10.968 4975.00 IOPS, 19.43 MiB/s [2024-12-06T15:50:00.593Z] 5207.00 IOPS, 20.34 MiB/s [2024-12-06T15:50:01.529Z] 5019.00 IOPS, 19.61 MiB/s [2024-12-06T15:50:02.464Z] 4918.25 IOPS, 19.21 MiB/s [2024-12-06T15:50:03.400Z] 4906.80 IOPS, 19.17 MiB/s [2024-12-06T15:50:04.337Z] 4920.17 IOPS, 19.22 MiB/s [2024-12-06T15:50:05.748Z] 4802.14 IOPS, 18.76 MiB/s [2024-12-06T15:50:06.684Z] 4718.25 IOPS, 18.43 MiB/s [2024-12-06T15:50:07.622Z] 4701.33 IOPS, 18.36 MiB/s [2024-12-06T15:50:07.622Z] 4739.20 IOPS, 18.51 MiB/s 00:23:18.929 Latency(us) 00:23:18.929 [2024-12-06T15:50:07.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.929 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.929 Verification LBA range: start 0x0 length 0x2000 00:23:18.929 TLSTESTn1 : 10.06 4724.45 18.45 0.00 0.00 27000.42 4287.15 58545.49 00:23:18.929 [2024-12-06T15:50:07.622Z] =================================================================================================================== 00:23:18.929 [2024-12-06T15:50:07.622Z] Total : 4724.45 18.45 0.00 0.00 27000.42 4287.15 58545.49 00:23:18.929 { 00:23:18.929 "results": [ 00:23:18.929 { 00:23:18.929 "job": "TLSTESTn1", 00:23:18.929 "core_mask": "0x4", 00:23:18.929 "workload": "verify", 00:23:18.929 "status": "finished", 00:23:18.929 "verify_range": { 00:23:18.929 "start": 0, 00:23:18.929 "length": 8192 00:23:18.929 }, 00:23:18.929 "queue_depth": 128, 00:23:18.929 "io_size": 4096, 00:23:18.929 "runtime": 10.058315, 00:23:18.929 "iops": 4724.44937347856, 00:23:18.929 "mibps": 18.454880365150625, 00:23:18.929 "io_failed": 0, 00:23:18.929 "io_timeout": 0, 00:23:18.929 "avg_latency_us": 27000.415102132436, 00:23:18.929 "min_latency_us": 4287.1466666666665, 00:23:18.929 "max_latency_us": 58545.49333333333 00:23:18.929 } 00:23:18.929 ], 00:23:18.929 
"core_count": 1 00:23:18.929 } 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2273981 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2273981 ']' 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2273981 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273981 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273981' 00:23:18.929 killing process with pid 2273981 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2273981 00:23:18.929 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.929 00:23:18.929 Latency(us) 00:23:18.929 [2024-12-06T15:50:07.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.929 [2024-12-06T15:50:07.622Z] =================================================================================================================== 00:23:18.929 [2024-12-06T15:50:07.622Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2273981 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.G0KVrez7L7 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0KVrez7L7 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0KVrez7L7 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0KVrez7L7 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.G0KVrez7L7 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2276364 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2276364 /var/tmp/bdevperf.sock 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2276364 ']' 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.929 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.929 [2024-12-06 16:50:07.579961] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:18.929 [2024-12-06 16:50:07.580017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2276364 ] 00:23:19.189 [2024-12-06 16:50:07.644705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.189 [2024-12-06 16:50:07.659912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.189 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.189 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.189 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:19.189 [2024-12-06 16:50:07.864401] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.G0KVrez7L7': 0100666 00:23:19.189 [2024-12-06 16:50:07.864426] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:19.189 request: 00:23:19.189 { 00:23:19.189 "name": "key0", 00:23:19.189 "path": "/tmp/tmp.G0KVrez7L7", 00:23:19.189 "method": "keyring_file_add_key", 00:23:19.189 "req_id": 1 00:23:19.189 } 00:23:19.189 Got JSON-RPC error response 00:23:19.189 response: 00:23:19.189 { 00:23:19.189 "code": -1, 00:23:19.189 "message": "Operation not permitted" 00:23:19.189 } 00:23:19.448 16:50:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.448 [2024-12-06 16:50:08.024869] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.448 [2024-12-06 16:50:08.024886] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:19.448 request: 00:23:19.448 { 00:23:19.448 "name": "TLSTEST", 00:23:19.448 "trtype": "tcp", 00:23:19.448 "traddr": "10.0.0.2", 00:23:19.448 "adrfam": "ipv4", 00:23:19.448 "trsvcid": "4420", 00:23:19.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.448 "prchk_reftag": false, 00:23:19.448 "prchk_guard": false, 00:23:19.448 "hdgst": false, 00:23:19.448 "ddgst": false, 00:23:19.448 "psk": "key0", 00:23:19.448 "allow_unrecognized_csi": false, 00:23:19.448 "method": "bdev_nvme_attach_controller", 00:23:19.448 "req_id": 1 00:23:19.448 } 00:23:19.448 Got JSON-RPC error response 00:23:19.448 response: 00:23:19.448 { 00:23:19.448 "code": -126, 00:23:19.448 "message": "Required key not available" 00:23:19.448 } 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2276364 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2276364 ']' 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2276364 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2276364 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2276364' 00:23:19.448 killing process with pid 2276364 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2276364 00:23:19.448 Received shutdown signal, test time was about 10.000000 seconds 00:23:19.448 00:23:19.448 Latency(us) 00:23:19.448 [2024-12-06T15:50:08.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.448 [2024-12-06T15:50:08.141Z] =================================================================================================================== 00:23:19.448 [2024-12-06T15:50:08.141Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:19.448 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2276364 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2273729 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2273729 ']' 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2273729 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2273729 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2273729' 00:23:19.707 killing process with pid 2273729 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2273729 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2273729 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2276653 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2276653 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2276653 ']' 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.707 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.707 [2024-12-06 16:50:08.367964] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:19.707 [2024-12-06 16:50:08.368018] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.968 [2024-12-06 16:50:08.437904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.968 [2024-12-06 16:50:08.452351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.968 [2024-12-06 16:50:08.452382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.968 [2024-12-06 16:50:08.452388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.968 [2024-12-06 16:50:08.452392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.968 [2024-12-06 16:50:08.452396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
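The NOT setup_nvmf_tgt call above exercises the target-side TLS bring-up while the key file is still 0666. Condensed from the xtrace that follows, with rpc.py standing in for the full scripts/rpc.py path used in the log:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7    # fails: bad file mode
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # add_host then returns -32603 "Internal error" because key0 was never added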
00:23:19.968 [2024-12-06 16:50:08.452885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.G0KVrez7L7 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.G0KVrez7L7 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.G0KVrez7L7 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.G0KVrez7L7 00:23:19.968 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:20.228 [2024-12-06 16:50:08.690519] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.228 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:20.228 16:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:20.487 [2024-12-06 16:50:09.003279] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.487 [2024-12-06 16:50:09.003479] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.487 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:20.487 malloc0 00:23:20.487 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:20.746 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:21.006 [2024-12-06 
16:50:09.478290] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.G0KVrez7L7': 0100666 00:23:21.006 [2024-12-06 16:50:09.478308] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:21.006 request: 00:23:21.006 { 00:23:21.006 "name": "key0", 00:23:21.006 "path": "/tmp/tmp.G0KVrez7L7", 00:23:21.006 "method": "keyring_file_add_key", 00:23:21.006 "req_id": 1 00:23:21.006 } 00:23:21.006 Got JSON-RPC error response 00:23:21.006 response: 00:23:21.006 { 00:23:21.006 "code": -1, 00:23:21.006 "message": "Operation not permitted" 00:23:21.006 } 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.006 [2024-12-06 16:50:09.634690] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:21.006 [2024-12-06 16:50:09.634713] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:21.006 request: 00:23:21.006 { 00:23:21.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.006 "host": "nqn.2016-06.io.spdk:host1", 00:23:21.006 "psk": "key0", 00:23:21.006 "method": "nvmf_subsystem_add_host", 00:23:21.006 "req_id": 1 00:23:21.006 } 00:23:21.006 Got JSON-RPC error response 00:23:21.006 response: 00:23:21.006 { 00:23:21.006 "code": -32603, 00:23:21.006 "message": "Internal error" 00:23:21.006 } 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2276653 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2276653 ']' 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2276653 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.006 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2276653 00:23:21.265 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:21.265 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:21.265 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2276653' 00:23:21.265 killing process with pid 2276653 00:23:21.265 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2276653 00:23:21.265 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2276653 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.G0KVrez7L7 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:21.266 16:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2277015 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2277015 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2277015 ']' 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.266 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:21.266 [2024-12-06 16:50:09.846135] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:21.266 [2024-12-06 16:50:09.846187] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.266 [2024-12-06 16:50:09.917419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.266 [2024-12-06 16:50:09.931718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.266 [2024-12-06 16:50:09.931747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.266 [2024-12-06 16:50:09.931753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.266 [2024-12-06 16:50:09.931758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.266 [2024-12-06 16:50:09.931763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
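With the key file restored to 0600 (target/tls.sh@182 above), this final pass repeats the target bring-up and then runs the positive attach from bdevperf. The client half, condensed from the earlier xtrace and aimed at the bdevperf app's private RPC socket (rpc.py again abbreviates the full scripts/rpc.py path):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    # then drive I/O for the test window:
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests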
00:23:21.266 [2024-12-06 16:50:09.932216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.525 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.525 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:21.525 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.525 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.525 16:50:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.525 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.525 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.G0KVrez7L7 00:23:21.526 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.G0KVrez7L7 00:23:21.526 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.526 [2024-12-06 16:50:10.161686] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.526 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:21.785 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:21.785 [2024-12-06 16:50:10.474444] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.785 [2024-12-06 16:50:10.474642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.045 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:22.045 malloc0 00:23:22.045 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.306 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:22.306 16:50:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2277378 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2277378 /var/tmp/bdevperf.sock 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2277378 ']' 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.565 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.565 [2024-12-06 16:50:11.153715] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:22.565 [2024-12-06 16:50:11.153771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277378 ] 00:23:22.565 [2024-12-06 16:50:11.218404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.565 [2024-12-06 16:50:11.234491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.824 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.824 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.824 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:22.824 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:23.082 [2024-12-06 16:50:11.595409] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:23.083 TLSTESTn1 00:23:23.083 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:23.342 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:23.342 "subsystems": [ 00:23:23.342 { 00:23:23.342 "subsystem": "keyring", 00:23:23.342 "config": [ 00:23:23.342 { 00:23:23.342 "method": "keyring_file_add_key", 00:23:23.342 "params": { 00:23:23.342 "name": "key0", 00:23:23.342 "path": "/tmp/tmp.G0KVrez7L7" 00:23:23.342 } 00:23:23.342 } 00:23:23.342 ] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "iobuf", 00:23:23.342 "config": [ 00:23:23.342 { 00:23:23.342 "method": "iobuf_set_options", 00:23:23.342 "params": { 00:23:23.342 "small_pool_count": 8192, 00:23:23.342 "large_pool_count": 1024, 00:23:23.342 "small_bufsize": 8192, 00:23:23.342 "large_bufsize": 135168, 00:23:23.342 "enable_numa": false 00:23:23.342 } 00:23:23.342 } 00:23:23.342 ] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "sock", 00:23:23.342 "config": [ 00:23:23.342 { 00:23:23.342 "method": "sock_set_default_impl", 00:23:23.342 "params": { 00:23:23.342 "impl_name": "posix" 
00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "sock_impl_set_options", 00:23:23.342 "params": { 00:23:23.342 "impl_name": "ssl", 00:23:23.342 "recv_buf_size": 4096, 00:23:23.342 "send_buf_size": 4096, 00:23:23.342 "enable_recv_pipe": true, 00:23:23.342 "enable_quickack": false, 00:23:23.342 "enable_placement_id": 0, 00:23:23.342 "enable_zerocopy_send_server": true, 00:23:23.342 "enable_zerocopy_send_client": false, 00:23:23.342 "zerocopy_threshold": 0, 00:23:23.342 "tls_version": 0, 00:23:23.342 "enable_ktls": false 00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "sock_impl_set_options", 00:23:23.342 "params": { 00:23:23.342 "impl_name": "posix", 00:23:23.342 "recv_buf_size": 2097152, 00:23:23.342 "send_buf_size": 2097152, 00:23:23.342 "enable_recv_pipe": true, 00:23:23.342 "enable_quickack": false, 00:23:23.342 "enable_placement_id": 0, 00:23:23.342 "enable_zerocopy_send_server": true, 00:23:23.342 "enable_zerocopy_send_client": false, 00:23:23.342 "zerocopy_threshold": 0, 00:23:23.342 "tls_version": 0, 00:23:23.342 "enable_ktls": false 00:23:23.342 } 00:23:23.342 } 00:23:23.342 ] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "vmd", 00:23:23.342 "config": [] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "accel", 00:23:23.342 "config": [ 00:23:23.342 { 00:23:23.342 "method": "accel_set_options", 00:23:23.342 "params": { 00:23:23.342 "small_cache_size": 128, 00:23:23.342 "large_cache_size": 16, 00:23:23.342 "task_count": 2048, 00:23:23.342 "sequence_count": 2048, 00:23:23.342 "buf_count": 2048 00:23:23.342 } 00:23:23.342 } 00:23:23.342 ] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "bdev", 00:23:23.342 "config": [ 00:23:23.342 { 00:23:23.342 "method": "bdev_set_options", 00:23:23.342 "params": { 00:23:23.342 "bdev_io_pool_size": 65535, 00:23:23.342 "bdev_io_cache_size": 256, 00:23:23.342 "bdev_auto_examine": true, 00:23:23.342 "iobuf_small_cache_size": 128, 00:23:23.342 "iobuf_large_cache_size": 16 00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "bdev_raid_set_options", 00:23:23.342 "params": { 00:23:23.342 "process_window_size_kb": 1024, 00:23:23.342 "process_max_bandwidth_mb_sec": 0 00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "bdev_iscsi_set_options", 00:23:23.342 "params": { 00:23:23.342 "timeout_sec": 30 00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "bdev_nvme_set_options", 00:23:23.342 "params": { 00:23:23.342 "action_on_timeout": "none", 00:23:23.342 "timeout_us": 0, 00:23:23.342 "timeout_admin_us": 0, 00:23:23.342 "keep_alive_timeout_ms": 10000, 00:23:23.342 "arbitration_burst": 0, 00:23:23.342 "low_priority_weight": 0, 00:23:23.342 "medium_priority_weight": 0, 00:23:23.342 "high_priority_weight": 0, 00:23:23.342 "nvme_adminq_poll_period_us": 10000, 00:23:23.342 "nvme_ioq_poll_period_us": 0, 00:23:23.342 "io_queue_requests": 0, 00:23:23.342 "delay_cmd_submit": true, 00:23:23.342 "transport_retry_count": 4, 00:23:23.342 "bdev_retry_count": 3, 00:23:23.342 "transport_ack_timeout": 0, 00:23:23.342 "ctrlr_loss_timeout_sec": 0, 00:23:23.342 "reconnect_delay_sec": 0, 00:23:23.342 "fast_io_fail_timeout_sec": 0, 00:23:23.342 "disable_auto_failback": false, 00:23:23.342 "generate_uuids": false, 00:23:23.342 "transport_tos": 0, 00:23:23.342 "nvme_error_stat": false, 00:23:23.342 "rdma_srq_size": 0, 00:23:23.342 "io_path_stat": false, 00:23:23.342 "allow_accel_sequence": false, 00:23:23.342 "rdma_max_cq_size": 0, 00:23:23.342 
"rdma_cm_event_timeout_ms": 0, 00:23:23.342 "dhchap_digests": [ 00:23:23.342 "sha256", 00:23:23.342 "sha384", 00:23:23.342 "sha512" 00:23:23.342 ], 00:23:23.342 "dhchap_dhgroups": [ 00:23:23.342 "null", 00:23:23.342 "ffdhe2048", 00:23:23.342 "ffdhe3072", 00:23:23.342 "ffdhe4096", 00:23:23.342 "ffdhe6144", 00:23:23.342 "ffdhe8192" 00:23:23.342 ] 00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "bdev_nvme_set_hotplug", 00:23:23.342 "params": { 00:23:23.342 "period_us": 100000, 00:23:23.342 "enable": false 00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "bdev_malloc_create", 00:23:23.342 "params": { 00:23:23.342 "name": "malloc0", 00:23:23.342 "num_blocks": 8192, 00:23:23.342 "block_size": 4096, 00:23:23.342 "physical_block_size": 4096, 00:23:23.342 "uuid": "08c1bdf3-8bc1-4b5e-ba93-4fdd930704a4", 00:23:23.342 "optimal_io_boundary": 0, 00:23:23.342 "md_size": 0, 00:23:23.342 "dif_type": 0, 00:23:23.342 "dif_is_head_of_md": false, 00:23:23.342 "dif_pi_format": 0 00:23:23.342 } 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "method": "bdev_wait_for_examine" 00:23:23.342 } 00:23:23.342 ] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "nbd", 00:23:23.342 "config": [] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "scheduler", 00:23:23.342 "config": [ 00:23:23.342 { 00:23:23.342 "method": "framework_set_scheduler", 00:23:23.342 "params": { 00:23:23.342 "name": "static" 00:23:23.342 } 00:23:23.342 } 00:23:23.342 ] 00:23:23.342 }, 00:23:23.342 { 00:23:23.342 "subsystem": "nvmf", 00:23:23.342 "config": [ 00:23:23.342 { 00:23:23.342 "method": "nvmf_set_config", 00:23:23.343 "params": { 00:23:23.343 "discovery_filter": "match_any", 00:23:23.343 "admin_cmd_passthru": { 00:23:23.343 "identify_ctrlr": false 00:23:23.343 }, 00:23:23.343 "dhchap_digests": [ 00:23:23.343 "sha256", 00:23:23.343 "sha384", 00:23:23.343 "sha512" 00:23:23.343 ], 00:23:23.343 "dhchap_dhgroups": [ 00:23:23.343 "null", 00:23:23.343 "ffdhe2048", 00:23:23.343 "ffdhe3072", 00:23:23.343 "ffdhe4096", 00:23:23.343 "ffdhe6144", 00:23:23.343 "ffdhe8192" 00:23:23.343 ] 00:23:23.343 } 00:23:23.343 }, 00:23:23.343 { 00:23:23.343 "method": "nvmf_set_max_subsystems", 00:23:23.343 "params": { 00:23:23.343 "max_subsystems": 1024 00:23:23.343 } 00:23:23.343 }, 00:23:23.343 { 00:23:23.343 "method": "nvmf_set_crdt", 00:23:23.343 "params": { 00:23:23.343 "crdt1": 0, 00:23:23.343 "crdt2": 0, 00:23:23.343 "crdt3": 0 00:23:23.343 } 00:23:23.343 }, 00:23:23.343 { 00:23:23.343 "method": "nvmf_create_transport", 00:23:23.343 "params": { 00:23:23.343 "trtype": "TCP", 00:23:23.343 "max_queue_depth": 128, 00:23:23.343 "max_io_qpairs_per_ctrlr": 127, 00:23:23.343 "in_capsule_data_size": 4096, 00:23:23.343 "max_io_size": 131072, 00:23:23.343 "io_unit_size": 131072, 00:23:23.343 "max_aq_depth": 128, 00:23:23.343 "num_shared_buffers": 511, 00:23:23.343 "buf_cache_size": 4294967295, 00:23:23.343 "dif_insert_or_strip": false, 00:23:23.343 "zcopy": false, 00:23:23.343 "c2h_success": false, 00:23:23.343 "sock_priority": 0, 00:23:23.343 "abort_timeout_sec": 1, 00:23:23.343 "ack_timeout": 0, 00:23:23.343 "data_wr_pool_size": 0 00:23:23.343 } 00:23:23.343 }, 00:23:23.343 { 00:23:23.343 "method": "nvmf_create_subsystem", 00:23:23.343 "params": { 00:23:23.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.343 "allow_any_host": false, 00:23:23.343 "serial_number": "SPDK00000000000001", 00:23:23.343 "model_number": "SPDK bdev Controller", 00:23:23.343 "max_namespaces": 10, 00:23:23.343 "min_cntlid": 1, 00:23:23.343 
"max_cntlid": 65519, 00:23:23.343 "ana_reporting": false 00:23:23.343 } 00:23:23.343 }, 00:23:23.343 { 00:23:23.343 "method": "nvmf_subsystem_add_host", 00:23:23.343 "params": { 00:23:23.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.343 "host": "nqn.2016-06.io.spdk:host1", 00:23:23.343 "psk": "key0" 00:23:23.343 } 00:23:23.343 }, 00:23:23.343 { 00:23:23.343 "method": "nvmf_subsystem_add_ns", 00:23:23.343 "params": { 00:23:23.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.343 "namespace": { 00:23:23.343 "nsid": 1, 00:23:23.343 "bdev_name": "malloc0", 00:23:23.343 "nguid": "08C1BDF38BC14B5EBA934FDD930704A4", 00:23:23.343 "uuid": "08c1bdf3-8bc1-4b5e-ba93-4fdd930704a4", 00:23:23.343 "no_auto_visible": false 00:23:23.343 } 00:23:23.343 } 00:23:23.343 }, 00:23:23.343 { 00:23:23.343 "method": "nvmf_subsystem_add_listener", 00:23:23.343 "params": { 00:23:23.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.343 "listen_address": { 00:23:23.343 "trtype": "TCP", 00:23:23.343 "adrfam": "IPv4", 00:23:23.343 "traddr": "10.0.0.2", 00:23:23.343 "trsvcid": "4420" 00:23:23.343 }, 00:23:23.343 "secure_channel": true 00:23:23.343 } 00:23:23.343 } 00:23:23.343 ] 00:23:23.343 } 00:23:23.343 ] 00:23:23.343 }' 00:23:23.343 16:50:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:23.604 "subsystems": [ 00:23:23.604 { 00:23:23.604 "subsystem": "keyring", 00:23:23.604 "config": [ 00:23:23.604 { 00:23:23.604 "method": "keyring_file_add_key", 00:23:23.604 "params": { 00:23:23.604 "name": "key0", 00:23:23.604 "path": "/tmp/tmp.G0KVrez7L7" 00:23:23.604 } 00:23:23.604 } 00:23:23.604 ] 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "subsystem": "iobuf", 00:23:23.604 "config": [ 00:23:23.604 { 00:23:23.604 "method": "iobuf_set_options", 00:23:23.604 "params": { 00:23:23.604 "small_pool_count": 8192, 00:23:23.604 "large_pool_count": 1024, 00:23:23.604 "small_bufsize": 8192, 00:23:23.604 "large_bufsize": 135168, 00:23:23.604 "enable_numa": false 00:23:23.604 } 00:23:23.604 } 00:23:23.604 ] 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "subsystem": "sock", 00:23:23.604 "config": [ 00:23:23.604 { 00:23:23.604 "method": "sock_set_default_impl", 00:23:23.604 "params": { 00:23:23.604 "impl_name": "posix" 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": "sock_impl_set_options", 00:23:23.604 "params": { 00:23:23.604 "impl_name": "ssl", 00:23:23.604 "recv_buf_size": 4096, 00:23:23.604 "send_buf_size": 4096, 00:23:23.604 "enable_recv_pipe": true, 00:23:23.604 "enable_quickack": false, 00:23:23.604 "enable_placement_id": 0, 00:23:23.604 "enable_zerocopy_send_server": true, 00:23:23.604 "enable_zerocopy_send_client": false, 00:23:23.604 "zerocopy_threshold": 0, 00:23:23.604 "tls_version": 0, 00:23:23.604 "enable_ktls": false 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": "sock_impl_set_options", 00:23:23.604 "params": { 00:23:23.604 "impl_name": "posix", 00:23:23.604 "recv_buf_size": 2097152, 00:23:23.604 "send_buf_size": 2097152, 00:23:23.604 "enable_recv_pipe": true, 00:23:23.604 "enable_quickack": false, 00:23:23.604 "enable_placement_id": 0, 00:23:23.604 "enable_zerocopy_send_server": true, 00:23:23.604 "enable_zerocopy_send_client": false, 00:23:23.604 "zerocopy_threshold": 0, 00:23:23.604 "tls_version": 0, 00:23:23.604 "enable_ktls": false 00:23:23.604 } 00:23:23.604 
} 00:23:23.604 ] 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "subsystem": "vmd", 00:23:23.604 "config": [] 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "subsystem": "accel", 00:23:23.604 "config": [ 00:23:23.604 { 00:23:23.604 "method": "accel_set_options", 00:23:23.604 "params": { 00:23:23.604 "small_cache_size": 128, 00:23:23.604 "large_cache_size": 16, 00:23:23.604 "task_count": 2048, 00:23:23.604 "sequence_count": 2048, 00:23:23.604 "buf_count": 2048 00:23:23.604 } 00:23:23.604 } 00:23:23.604 ] 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "subsystem": "bdev", 00:23:23.604 "config": [ 00:23:23.604 { 00:23:23.604 "method": "bdev_set_options", 00:23:23.604 "params": { 00:23:23.604 "bdev_io_pool_size": 65535, 00:23:23.604 "bdev_io_cache_size": 256, 00:23:23.604 "bdev_auto_examine": true, 00:23:23.604 "iobuf_small_cache_size": 128, 00:23:23.604 "iobuf_large_cache_size": 16 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": "bdev_raid_set_options", 00:23:23.604 "params": { 00:23:23.604 "process_window_size_kb": 1024, 00:23:23.604 "process_max_bandwidth_mb_sec": 0 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": "bdev_iscsi_set_options", 00:23:23.604 "params": { 00:23:23.604 "timeout_sec": 30 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": "bdev_nvme_set_options", 00:23:23.604 "params": { 00:23:23.604 "action_on_timeout": "none", 00:23:23.604 "timeout_us": 0, 00:23:23.604 "timeout_admin_us": 0, 00:23:23.604 "keep_alive_timeout_ms": 10000, 00:23:23.604 "arbitration_burst": 0, 00:23:23.604 "low_priority_weight": 0, 00:23:23.604 "medium_priority_weight": 0, 00:23:23.604 "high_priority_weight": 0, 00:23:23.604 "nvme_adminq_poll_period_us": 10000, 00:23:23.604 "nvme_ioq_poll_period_us": 0, 00:23:23.604 "io_queue_requests": 512, 00:23:23.604 "delay_cmd_submit": true, 00:23:23.604 "transport_retry_count": 4, 00:23:23.604 "bdev_retry_count": 3, 00:23:23.604 "transport_ack_timeout": 0, 00:23:23.604 "ctrlr_loss_timeout_sec": 0, 00:23:23.604 "reconnect_delay_sec": 0, 00:23:23.604 "fast_io_fail_timeout_sec": 0, 00:23:23.604 "disable_auto_failback": false, 00:23:23.604 "generate_uuids": false, 00:23:23.604 "transport_tos": 0, 00:23:23.604 "nvme_error_stat": false, 00:23:23.604 "rdma_srq_size": 0, 00:23:23.604 "io_path_stat": false, 00:23:23.604 "allow_accel_sequence": false, 00:23:23.604 "rdma_max_cq_size": 0, 00:23:23.604 "rdma_cm_event_timeout_ms": 0, 00:23:23.604 "dhchap_digests": [ 00:23:23.604 "sha256", 00:23:23.604 "sha384", 00:23:23.604 "sha512" 00:23:23.604 ], 00:23:23.604 "dhchap_dhgroups": [ 00:23:23.604 "null", 00:23:23.604 "ffdhe2048", 00:23:23.604 "ffdhe3072", 00:23:23.604 "ffdhe4096", 00:23:23.604 "ffdhe6144", 00:23:23.604 "ffdhe8192" 00:23:23.604 ] 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": "bdev_nvme_attach_controller", 00:23:23.604 "params": { 00:23:23.604 "name": "TLSTEST", 00:23:23.604 "trtype": "TCP", 00:23:23.604 "adrfam": "IPv4", 00:23:23.604 "traddr": "10.0.0.2", 00:23:23.604 "trsvcid": "4420", 00:23:23.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.604 "prchk_reftag": false, 00:23:23.604 "prchk_guard": false, 00:23:23.604 "ctrlr_loss_timeout_sec": 0, 00:23:23.604 "reconnect_delay_sec": 0, 00:23:23.604 "fast_io_fail_timeout_sec": 0, 00:23:23.604 "psk": "key0", 00:23:23.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.604 "hdgst": false, 00:23:23.604 "ddgst": false, 00:23:23.604 "multipath": "multipath" 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": 
"bdev_nvme_set_hotplug", 00:23:23.604 "params": { 00:23:23.604 "period_us": 100000, 00:23:23.604 "enable": false 00:23:23.604 } 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "method": "bdev_wait_for_examine" 00:23:23.604 } 00:23:23.604 ] 00:23:23.604 }, 00:23:23.604 { 00:23:23.604 "subsystem": "nbd", 00:23:23.604 "config": [] 00:23:23.604 } 00:23:23.604 ] 00:23:23.604 }' 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2277378 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2277378 ']' 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2277378 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277378 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277378' 00:23:23.604 killing process with pid 2277378 00:23:23.604 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2277378 00:23:23.604 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.604 00:23:23.604 Latency(us) 00:23:23.604 [2024-12-06T15:50:12.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.605 [2024-12-06T15:50:12.298Z] =================================================================================================================== 00:23:23.605 [2024-12-06T15:50:12.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.605 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2277378 00:23:23.605 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2277015 00:23:23.605 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2277015 ']' 00:23:23.605 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2277015 00:23:23.605 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:23.605 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.605 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277015 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277015' 00:23:23.865 killing process with pid 2277015 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2277015 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2277015 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.865 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:23.865 "subsystems": [ 00:23:23.865 { 00:23:23.865 "subsystem": "keyring", 00:23:23.865 "config": [ 00:23:23.865 { 00:23:23.865 "method": "keyring_file_add_key", 00:23:23.865 "params": { 00:23:23.865 "name": "key0", 00:23:23.865 "path": "/tmp/tmp.G0KVrez7L7" 00:23:23.865 } 00:23:23.865 } 00:23:23.865 ] 00:23:23.865 }, 00:23:23.865 { 00:23:23.865 "subsystem": "iobuf", 00:23:23.865 "config": [ 00:23:23.865 { 00:23:23.865 "method": "iobuf_set_options", 00:23:23.865 "params": { 00:23:23.865 "small_pool_count": 8192, 00:23:23.865 "large_pool_count": 1024, 00:23:23.865 "small_bufsize": 8192, 00:23:23.865 "large_bufsize": 135168, 00:23:23.865 "enable_numa": false 00:23:23.865 } 00:23:23.865 } 00:23:23.865 ] 00:23:23.865 }, 00:23:23.865 { 00:23:23.865 "subsystem": "sock", 00:23:23.865 "config": [ 00:23:23.865 { 00:23:23.865 "method": "sock_set_default_impl", 00:23:23.865 "params": { 00:23:23.865 "impl_name": "posix" 00:23:23.865 } 00:23:23.865 }, 00:23:23.865 { 00:23:23.865 "method": "sock_impl_set_options", 00:23:23.865 "params": { 00:23:23.865 "impl_name": "ssl", 00:23:23.865 "recv_buf_size": 4096, 00:23:23.865 "send_buf_size": 4096, 00:23:23.865 "enable_recv_pipe": true, 00:23:23.865 "enable_quickack": false, 00:23:23.865 "enable_placement_id": 0, 00:23:23.865 "enable_zerocopy_send_server": true, 00:23:23.865 "enable_zerocopy_send_client": false, 00:23:23.865 "zerocopy_threshold": 0, 00:23:23.865 "tls_version": 0, 00:23:23.865 "enable_ktls": false 00:23:23.865 } 00:23:23.865 }, 00:23:23.865 { 00:23:23.865 "method": "sock_impl_set_options", 00:23:23.865 "params": { 00:23:23.865 "impl_name": "posix", 00:23:23.865 "recv_buf_size": 2097152, 00:23:23.865 "send_buf_size": 2097152, 00:23:23.865 "enable_recv_pipe": true, 00:23:23.865 "enable_quickack": false, 00:23:23.865 "enable_placement_id": 0, 00:23:23.865 "enable_zerocopy_send_server": true, 00:23:23.865 "enable_zerocopy_send_client": false, 00:23:23.865 "zerocopy_threshold": 0, 00:23:23.865 "tls_version": 0, 00:23:23.865 "enable_ktls": false 00:23:23.865 } 00:23:23.866 } 00:23:23.866 ] 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "subsystem": "vmd", 00:23:23.866 "config": [] 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "subsystem": "accel", 00:23:23.866 "config": [ 00:23:23.866 { 00:23:23.866 "method": "accel_set_options", 00:23:23.866 "params": { 00:23:23.866 "small_cache_size": 128, 00:23:23.866 "large_cache_size": 16, 00:23:23.866 "task_count": 2048, 00:23:23.866 "sequence_count": 2048, 00:23:23.866 "buf_count": 2048 00:23:23.866 } 00:23:23.866 } 00:23:23.866 ] 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "subsystem": "bdev", 00:23:23.866 "config": [ 00:23:23.866 { 00:23:23.866 "method": "bdev_set_options", 00:23:23.866 "params": { 00:23:23.866 "bdev_io_pool_size": 65535, 00:23:23.866 "bdev_io_cache_size": 256, 00:23:23.866 "bdev_auto_examine": true, 00:23:23.866 "iobuf_small_cache_size": 128, 00:23:23.866 "iobuf_large_cache_size": 16 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "bdev_raid_set_options", 00:23:23.866 "params": { 00:23:23.866 
"process_window_size_kb": 1024, 00:23:23.866 "process_max_bandwidth_mb_sec": 0 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "bdev_iscsi_set_options", 00:23:23.866 "params": { 00:23:23.866 "timeout_sec": 30 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "bdev_nvme_set_options", 00:23:23.866 "params": { 00:23:23.866 "action_on_timeout": "none", 00:23:23.866 "timeout_us": 0, 00:23:23.866 "timeout_admin_us": 0, 00:23:23.866 "keep_alive_timeout_ms": 10000, 00:23:23.866 "arbitration_burst": 0, 00:23:23.866 "low_priority_weight": 0, 00:23:23.866 "medium_priority_weight": 0, 00:23:23.866 "high_priority_weight": 0, 00:23:23.866 "nvme_adminq_poll_period_us": 10000, 00:23:23.866 "nvme_ioq_poll_period_us": 0, 00:23:23.866 "io_queue_requests": 0, 00:23:23.866 "delay_cmd_submit": true, 00:23:23.866 "transport_retry_count": 4, 00:23:23.866 "bdev_retry_count": 3, 00:23:23.866 "transport_ack_timeout": 0, 00:23:23.866 "ctrlr_loss_timeout_sec": 0, 00:23:23.866 "reconnect_delay_sec": 0, 00:23:23.866 "fast_io_fail_timeout_sec": 0, 00:23:23.866 "disable_auto_failback": false, 00:23:23.866 "generate_uuids": false, 00:23:23.866 "transport_tos": 0, 00:23:23.866 "nvme_error_stat": false, 00:23:23.866 "rdma_srq_size": 0, 00:23:23.866 "io_path_stat": false, 00:23:23.866 "allow_accel_sequence": false, 00:23:23.866 "rdma_max_cq_size": 0, 00:23:23.866 "rdma_cm_event_timeout_ms": 0, 00:23:23.866 "dhchap_digests": [ 00:23:23.866 "sha256", 00:23:23.866 "sha384", 00:23:23.866 "sha512" 00:23:23.866 ], 00:23:23.866 "dhchap_dhgroups": [ 00:23:23.866 "null", 00:23:23.866 "ffdhe2048", 00:23:23.866 "ffdhe3072", 00:23:23.866 "ffdhe4096", 00:23:23.866 "ffdhe6144", 00:23:23.866 "ffdhe8192" 00:23:23.866 ] 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "bdev_nvme_set_hotplug", 00:23:23.866 "params": { 00:23:23.866 "period_us": 100000, 00:23:23.866 "enable": false 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "bdev_malloc_create", 00:23:23.866 "params": { 00:23:23.866 "name": "malloc0", 00:23:23.866 "num_blocks": 8192, 00:23:23.866 "block_size": 4096, 00:23:23.866 "physical_block_size": 4096, 00:23:23.866 "uuid": "08c1bdf3-8bc1-4b5e-ba93-4fdd930704a4", 00:23:23.866 "optimal_io_boundary": 0, 00:23:23.866 "md_size": 0, 00:23:23.866 "dif_type": 0, 00:23:23.866 "dif_is_head_of_md": false, 00:23:23.866 "dif_pi_format": 0 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "bdev_wait_for_examine" 00:23:23.866 } 00:23:23.866 ] 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "subsystem": "nbd", 00:23:23.866 "config": [] 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "subsystem": "scheduler", 00:23:23.866 "config": [ 00:23:23.866 { 00:23:23.866 "method": "framework_set_scheduler", 00:23:23.866 "params": { 00:23:23.866 "name": "static" 00:23:23.866 } 00:23:23.866 } 00:23:23.866 ] 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "subsystem": "nvmf", 00:23:23.866 "config": [ 00:23:23.866 { 00:23:23.866 "method": "nvmf_set_config", 00:23:23.866 "params": { 00:23:23.866 "discovery_filter": "match_any", 00:23:23.866 "admin_cmd_passthru": { 00:23:23.866 "identify_ctrlr": false 00:23:23.866 }, 00:23:23.866 "dhchap_digests": [ 00:23:23.866 "sha256", 00:23:23.866 "sha384", 00:23:23.866 "sha512" 00:23:23.866 ], 00:23:23.866 "dhchap_dhgroups": [ 00:23:23.866 "null", 00:23:23.866 "ffdhe2048", 00:23:23.866 "ffdhe3072", 00:23:23.866 "ffdhe4096", 00:23:23.866 "ffdhe6144", 00:23:23.866 "ffdhe8192" 00:23:23.866 ] 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 
00:23:23.866 "method": "nvmf_set_max_subsystems", 00:23:23.866 "params": { 00:23:23.866 "max_subsystems": 1024 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "nvmf_set_crdt", 00:23:23.866 "params": { 00:23:23.866 "crdt1": 0, 00:23:23.866 "crdt2": 0, 00:23:23.866 "crdt3": 0 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "nvmf_create_transport", 00:23:23.866 "params": { 00:23:23.866 "trtype": "TCP", 00:23:23.866 "max_queue_depth": 128, 00:23:23.866 "max_io_qpairs_per_ctrlr": 127, 00:23:23.866 "in_capsule_data_size": 4096, 00:23:23.866 "max_io_size": 131072, 00:23:23.866 "io_unit_size": 131072, 00:23:23.866 "max_aq_depth": 128, 00:23:23.866 "num_shared_buffers": 511, 00:23:23.866 "buf_cache_size": 4294967295, 00:23:23.866 "dif_insert_or_strip": false, 00:23:23.866 "zcopy": false, 00:23:23.866 "c2h_success": false, 00:23:23.866 "sock_priority": 0, 00:23:23.866 "abort_timeout_sec": 1, 00:23:23.866 "ack_timeout": 0, 00:23:23.866 "data_wr_pool_size": 0 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "nvmf_create_subsystem", 00:23:23.866 "params": { 00:23:23.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.866 "allow_any_host": false, 00:23:23.866 "serial_number": "SPDK00000000000001", 00:23:23.866 "model_number": "SPDK bdev Controller", 00:23:23.866 "max_namespaces": 10, 00:23:23.866 "min_cntlid": 1, 00:23:23.866 "max_cntlid": 65519, 00:23:23.866 "ana_reporting": false 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "nvmf_subsystem_add_host", 00:23:23.866 "params": { 00:23:23.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.866 "host": "nqn.2016-06.io.spdk:host1", 00:23:23.866 "psk": "key0" 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "nvmf_subsystem_add_ns", 00:23:23.866 "params": { 00:23:23.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.866 "namespace": { 00:23:23.866 "nsid": 1, 00:23:23.866 "bdev_name": "malloc0", 00:23:23.866 "nguid": "08C1BDF38BC14B5EBA934FDD930704A4", 00:23:23.866 "uuid": "08c1bdf3-8bc1-4b5e-ba93-4fdd930704a4", 00:23:23.866 "no_auto_visible": false 00:23:23.866 } 00:23:23.866 } 00:23:23.866 }, 00:23:23.866 { 00:23:23.866 "method": "nvmf_subsystem_add_listener", 00:23:23.866 "params": { 00:23:23.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.866 "listen_address": { 00:23:23.866 "trtype": "TCP", 00:23:23.866 "adrfam": "IPv4", 00:23:23.866 "traddr": "10.0.0.2", 00:23:23.866 "trsvcid": "4420" 00:23:23.866 }, 00:23:23.866 "secure_channel": true 00:23:23.866 } 00:23:23.866 } 00:23:23.866 ] 00:23:23.866 } 00:23:23.866 ] 00:23:23.866 }' 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2277678 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2277678 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2277678 ']' 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:23:23.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.867 16:50:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.867 [2024-12-06 16:50:12.461767] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:23.867 [2024-12-06 16:50:12.461809] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.867 [2024-12-06 16:50:12.521552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.867 [2024-12-06 16:50:12.536496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.867 [2024-12-06 16:50:12.536525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.867 [2024-12-06 16:50:12.536530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.867 [2024-12-06 16:50:12.536535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.867 [2024-12-06 16:50:12.536539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.867 [2024-12-06 16:50:12.537017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.126 [2024-12-06 16:50:12.725510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.127 [2024-12-06 16:50:12.757531] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.127 [2024-12-06 16:50:12.757735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2277757 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2277757 /var/tmp/bdevperf.sock 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2277757 ']' 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:24.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.697 16:50:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:24.697 "subsystems": [ 00:23:24.697 { 00:23:24.697 "subsystem": "keyring", 00:23:24.697 "config": [ 00:23:24.697 { 00:23:24.697 "method": "keyring_file_add_key", 00:23:24.697 "params": { 00:23:24.697 "name": "key0", 00:23:24.697 "path": "/tmp/tmp.G0KVrez7L7" 00:23:24.697 } 00:23:24.697 } 00:23:24.697 ] 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "subsystem": "iobuf", 00:23:24.697 "config": [ 00:23:24.697 { 00:23:24.697 "method": "iobuf_set_options", 00:23:24.697 "params": { 00:23:24.697 "small_pool_count": 8192, 00:23:24.697 "large_pool_count": 1024, 00:23:24.697 "small_bufsize": 8192, 00:23:24.697 "large_bufsize": 135168, 00:23:24.697 "enable_numa": false 00:23:24.697 } 00:23:24.697 } 00:23:24.697 ] 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "subsystem": "sock", 00:23:24.697 "config": [ 00:23:24.697 { 00:23:24.697 "method": "sock_set_default_impl", 00:23:24.697 "params": { 00:23:24.697 "impl_name": "posix" 00:23:24.697 } 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "method": "sock_impl_set_options", 00:23:24.697 "params": { 00:23:24.697 "impl_name": "ssl", 00:23:24.697 "recv_buf_size": 4096, 00:23:24.697 "send_buf_size": 4096, 00:23:24.697 "enable_recv_pipe": true, 00:23:24.697 "enable_quickack": false, 00:23:24.697 "enable_placement_id": 0, 00:23:24.697 "enable_zerocopy_send_server": true, 00:23:24.697 "enable_zerocopy_send_client": false, 00:23:24.697 "zerocopy_threshold": 0, 00:23:24.697 "tls_version": 0, 00:23:24.697 "enable_ktls": false 00:23:24.697 } 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "method": "sock_impl_set_options", 00:23:24.697 "params": { 00:23:24.697 "impl_name": "posix", 00:23:24.697 "recv_buf_size": 2097152, 00:23:24.697 "send_buf_size": 2097152, 00:23:24.697 "enable_recv_pipe": true, 00:23:24.697 "enable_quickack": false, 00:23:24.697 "enable_placement_id": 0, 00:23:24.697 "enable_zerocopy_send_server": true, 00:23:24.697 "enable_zerocopy_send_client": false, 00:23:24.697 "zerocopy_threshold": 0, 00:23:24.697 "tls_version": 0, 00:23:24.697 "enable_ktls": false 00:23:24.697 } 00:23:24.697 } 00:23:24.697 ] 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "subsystem": "vmd", 00:23:24.697 "config": [] 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "subsystem": "accel", 00:23:24.697 "config": [ 00:23:24.697 { 00:23:24.697 "method": "accel_set_options", 00:23:24.697 "params": { 00:23:24.697 "small_cache_size": 128, 00:23:24.697 "large_cache_size": 16, 00:23:24.697 "task_count": 2048, 00:23:24.697 "sequence_count": 2048, 00:23:24.697 "buf_count": 2048 00:23:24.697 } 00:23:24.697 } 00:23:24.697 ] 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "subsystem": "bdev", 00:23:24.697 "config": [ 00:23:24.697 { 00:23:24.697 "method": "bdev_set_options", 00:23:24.697 "params": { 00:23:24.697 "bdev_io_pool_size": 65535, 00:23:24.697 "bdev_io_cache_size": 256, 00:23:24.697 "bdev_auto_examine": true, 00:23:24.697 "iobuf_small_cache_size": 128, 
00:23:24.697 "iobuf_large_cache_size": 16 00:23:24.697 } 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "method": "bdev_raid_set_options", 00:23:24.697 "params": { 00:23:24.697 "process_window_size_kb": 1024, 00:23:24.697 "process_max_bandwidth_mb_sec": 0 00:23:24.697 } 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "method": "bdev_iscsi_set_options", 00:23:24.697 "params": { 00:23:24.697 "timeout_sec": 30 00:23:24.697 } 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "method": "bdev_nvme_set_options", 00:23:24.697 "params": { 00:23:24.697 "action_on_timeout": "none", 00:23:24.697 "timeout_us": 0, 00:23:24.697 "timeout_admin_us": 0, 00:23:24.697 "keep_alive_timeout_ms": 10000, 00:23:24.697 "arbitration_burst": 0, 00:23:24.697 "low_priority_weight": 0, 00:23:24.697 "medium_priority_weight": 0, 00:23:24.697 "high_priority_weight": 0, 00:23:24.697 "nvme_adminq_poll_period_us": 10000, 00:23:24.697 "nvme_ioq_poll_period_us": 0, 00:23:24.697 "io_queue_requests": 512, 00:23:24.697 "delay_cmd_submit": true, 00:23:24.697 "transport_retry_count": 4, 00:23:24.697 "bdev_retry_count": 3, 00:23:24.697 "transport_ack_timeout": 0, 00:23:24.697 "ctrlr_loss_timeout_sec": 0, 00:23:24.697 "reconnect_delay_sec": 0, 00:23:24.697 "fast_io_fail_timeout_sec": 0, 00:23:24.697 "disable_auto_failback": false, 00:23:24.697 "generate_uuids": false, 00:23:24.697 "transport_tos": 0, 00:23:24.697 "nvme_error_stat": false, 00:23:24.697 "rdma_srq_size": 0, 00:23:24.697 "io_path_stat": false, 00:23:24.697 "allow_accel_sequence": false, 00:23:24.697 "rdma_max_cq_size": 0, 00:23:24.697 "rdma_cm_event_timeout_ms": 0, 00:23:24.697 "dhchap_digests": [ 00:23:24.697 "sha256", 00:23:24.697 "sha384", 00:23:24.697 "sha512" 00:23:24.697 ], 00:23:24.697 "dhchap_dhgroups": [ 00:23:24.697 "null", 00:23:24.697 "ffdhe2048", 00:23:24.697 "ffdhe3072", 00:23:24.697 "ffdhe4096", 00:23:24.697 "ffdhe6144", 00:23:24.697 "ffdhe8192" 00:23:24.697 ] 00:23:24.697 } 00:23:24.697 }, 00:23:24.697 { 00:23:24.697 "method": "bdev_nvme_attach_controller", 00:23:24.697 "params": { 00:23:24.697 "name": "TLSTEST", 00:23:24.697 "trtype": "TCP", 00:23:24.697 "adrfam": "IPv4", 00:23:24.698 "traddr": "10.0.0.2", 00:23:24.698 "trsvcid": "4420", 00:23:24.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.698 "prchk_reftag": false, 00:23:24.698 "prchk_guard": false, 00:23:24.698 "ctrlr_loss_timeout_sec": 0, 00:23:24.698 "reconnect_delay_sec": 0, 00:23:24.698 "fast_io_fail_timeout_sec": 0, 00:23:24.698 "psk": "key0", 00:23:24.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.698 "hdgst": false, 00:23:24.698 "ddgst": false, 00:23:24.698 "multipath": "multipath" 00:23:24.698 } 00:23:24.698 }, 00:23:24.698 { 00:23:24.698 "method": "bdev_nvme_set_hotplug", 00:23:24.698 "params": { 00:23:24.698 "period_us": 100000, 00:23:24.698 "enable": false 00:23:24.698 } 00:23:24.698 }, 00:23:24.698 { 00:23:24.698 "method": "bdev_wait_for_examine" 00:23:24.698 } 00:23:24.698 ] 00:23:24.698 }, 00:23:24.698 { 00:23:24.698 "subsystem": "nbd", 00:23:24.698 "config": [] 00:23:24.698 } 00:23:24.698 ] 00:23:24.698 }' 00:23:24.698 [2024-12-06 16:50:13.304050] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:24.698 [2024-12-06 16:50:13.304109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277757 ] 00:23:24.698 [2024-12-06 16:50:13.368492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.698 [2024-12-06 16:50:13.384855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.957 [2024-12-06 16:50:13.514703] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.526 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.526 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.526 16:50:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:25.526 Running I/O for 10 seconds... 00:23:27.837 5759.00 IOPS, 22.50 MiB/s [2024-12-06T15:50:17.481Z] 5588.50 IOPS, 21.83 MiB/s [2024-12-06T15:50:18.420Z] 5556.33 IOPS, 21.70 MiB/s [2024-12-06T15:50:19.358Z] 5412.00 IOPS, 21.14 MiB/s [2024-12-06T15:50:20.296Z] 5460.40 IOPS, 21.33 MiB/s [2024-12-06T15:50:21.233Z] 5294.33 IOPS, 20.68 MiB/s [2024-12-06T15:50:22.693Z] 5132.14 IOPS, 20.05 MiB/s [2024-12-06T15:50:23.318Z] 5068.88 IOPS, 19.80 MiB/s [2024-12-06T15:50:24.255Z] 5132.78 IOPS, 20.05 MiB/s [2024-12-06T15:50:24.256Z] 5029.50 IOPS, 19.65 MiB/s 00:23:35.563 Latency(us) 00:23:35.563 [2024-12-06T15:50:24.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.563 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:35.563 Verification LBA range: start 0x0 length 0x2000 00:23:35.563 TLSTESTn1 : 10.05 5016.16 19.59 0.00 0.00 25437.21 4587.52 49588.91 00:23:35.563 [2024-12-06T15:50:24.256Z] =================================================================================================================== 00:23:35.563 [2024-12-06T15:50:24.256Z] Total : 5016.16 19.59 0.00 0.00 25437.21 4587.52 49588.91 00:23:35.563 { 00:23:35.563 "results": [ 00:23:35.563 { 00:23:35.563 "job": "TLSTESTn1", 00:23:35.563 "core_mask": "0x4", 00:23:35.563 "workload": "verify", 00:23:35.563 "status": "finished", 00:23:35.563 "verify_range": { 00:23:35.563 "start": 0, 00:23:35.563 "length": 8192 00:23:35.563 }, 00:23:35.563 "queue_depth": 128, 00:23:35.563 "io_size": 4096, 00:23:35.563 "runtime": 10.052121, 00:23:35.563 "iops": 5016.155296976628, 00:23:35.563 "mibps": 19.594356628814953, 00:23:35.563 "io_failed": 0, 00:23:35.563 "io_timeout": 0, 00:23:35.563 "avg_latency_us": 25437.212505933137, 00:23:35.563 "min_latency_us": 4587.52, 00:23:35.563 "max_latency_us": 49588.90666666667 00:23:35.563 } 00:23:35.563 ], 00:23:35.563 "core_count": 1 00:23:35.563 } 00:23:35.563 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:35.563 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2277757 00:23:35.563 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2277757 ']' 00:23:35.563 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2277757 00:23:35.563 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277757 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277757' 00:23:35.822 killing process with pid 2277757 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2277757 00:23:35.822 Received shutdown signal, test time was about 10.000000 seconds 00:23:35.822 00:23:35.822 Latency(us) 00:23:35.822 [2024-12-06T15:50:24.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.822 [2024-12-06T15:50:24.515Z] =================================================================================================================== 00:23:35.822 [2024-12-06T15:50:24.515Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2277757 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2277678 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2277678 ']' 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2277678 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2277678 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2277678' 00:23:35.822 killing process with pid 2277678 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2277678 00:23:35.822 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2277678 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2280286 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2280286 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2280286 ']' 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:36.081 [2024-12-06 16:50:24.591047] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:36.081 [2024-12-06 16:50:24.591109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.081 [2024-12-06 16:50:24.673735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.081 [2024-12-06 16:50:24.690708] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.081 [2024-12-06 16:50:24.690744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.081 [2024-12-06 16:50:24.690752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.081 [2024-12-06 16:50:24.690759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.081 [2024-12-06 16:50:24.690765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
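With the first target/bdevperf pair torn down, the log below starts a fresh nvmf_tgt and runs setup_nvmf_tgt against it with the same PSK file. Judging from the rpc.py traces, that helper amounts to the RPC sequence sketched here; paths and arguments are taken verbatim from the log, $SPDK_DIR is assumed from the launch sketch above, and the c2h_success comment is inferred from the saved config earlier in this run:

RPC="$SPDK_DIR/scripts/rpc.py"
key=/tmp/tmp.G0KVrez7L7                                  # TLS PSK file created earlier in the test

$RPC nvmf_create_transport -t tcp -o                     # TCP transport; -o flips c2h_success off
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -s SPDK00000000000001 -m 10                         # serial number, up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k                       # -k: secure (TLS) listener, still experimental
$RPC bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM bdev with 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$key"                    # register the PSK in the target keyring
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk key0                # admit host1, authenticated by key0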
00:23:36.081 [2024-12-06 16:50:24.691375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.081 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.340 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.340 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.G0KVrez7L7 00:23:36.340 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.G0KVrez7L7 00:23:36.340 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:36.340 [2024-12-06 16:50:24.925340] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.340 16:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:36.598 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.598 [2024-12-06 16:50:25.238126] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.598 [2024-12-06 16:50:25.238361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.598 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:36.857 malloc0 00:23:36.857 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:37.116 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:37.116 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2280490 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2280490 /var/tmp/bdevperf.sock 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2280490 ']' 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.376 16:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.376 16:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.376 [2024-12-06 16:50:25.921797] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:37.376 [2024-12-06 16:50:25.921868] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280490 ] 00:23:37.376 [2024-12-06 16:50:25.991717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.376 [2024-12-06 16:50:26.013134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.635 16:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.635 16:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.635 16:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:37.635 16:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:37.893 [2024-12-06 16:50:26.383678] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:37.893 nvme0n1 00:23:37.893 16:50:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:37.893 Running I/O for 1 seconds... 
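The one-second verify run above is driven entirely over the bdevperf RPC socket: bdevperf starts idle (-z), the same PSK is registered on the initiator side, and the controller is attached through the TLS listener before perform_tests fires. A condensed sketch of that client sequence, reusing $SPDK_DIR and $RPC from the sketches above and assuming the target from the previous step is already listening on 10.0.0.2:4420:

# Start bdevperf idle on its own RPC socket (core mask 2, 4 KiB verify I/O, 1 s).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 &

# Same PSK on the initiator, then attach over the secure channel.
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# Kick off the timed run against the freshly created nvme0n1 bdev.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests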
00:23:39.270 4470.00 IOPS, 17.46 MiB/s 00:23:39.270 Latency(us) 00:23:39.270 [2024-12-06T15:50:27.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.270 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:39.270 Verification LBA range: start 0x0 length 0x2000 00:23:39.270 nvme0n1 : 1.04 4440.28 17.34 0.00 0.00 28377.06 4587.52 47841.28 00:23:39.270 [2024-12-06T15:50:27.963Z] =================================================================================================================== 00:23:39.270 [2024-12-06T15:50:27.963Z] Total : 4440.28 17.34 0.00 0.00 28377.06 4587.52 47841.28 00:23:39.270 { 00:23:39.270 "results": [ 00:23:39.270 { 00:23:39.270 "job": "nvme0n1", 00:23:39.270 "core_mask": "0x2", 00:23:39.270 "workload": "verify", 00:23:39.270 "status": "finished", 00:23:39.270 "verify_range": { 00:23:39.270 "start": 0, 00:23:39.270 "length": 8192 00:23:39.270 }, 00:23:39.270 "queue_depth": 128, 00:23:39.270 "io_size": 4096, 00:23:39.270 "runtime": 1.03552, 00:23:39.270 "iops": 4440.281211372065, 00:23:39.270 "mibps": 17.344848481922128, 00:23:39.270 "io_failed": 0, 00:23:39.270 "io_timeout": 0, 00:23:39.270 "avg_latency_us": 28377.057712048714, 00:23:39.270 "min_latency_us": 4587.52, 00:23:39.270 "max_latency_us": 47841.28 00:23:39.270 } 00:23:39.270 ], 00:23:39.270 "core_count": 1 00:23:39.270 } 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2280490 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2280490 ']' 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2280490 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2280490 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2280490' 00:23:39.270 killing process with pid 2280490 00:23:39.270 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2280490 00:23:39.270 Received shutdown signal, test time was about 1.000000 seconds 00:23:39.270 00:23:39.270 Latency(us) 00:23:39.270 [2024-12-06T15:50:27.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.270 [2024-12-06T15:50:27.964Z] =================================================================================================================== 00:23:39.271 [2024-12-06T15:50:27.964Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2280490 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2280286 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2280286 ']' 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2280286 00:23:39.271 16:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2280286 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2280286' 00:23:39.271 killing process with pid 2280286 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2280286 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2280286 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2281123 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2281123 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2281123 ']' 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.271 16:50:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:39.271 [2024-12-06 16:50:27.948400] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:39.271 [2024-12-06 16:50:27.948457] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.530 [2024-12-06 16:50:28.033636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.530 [2024-12-06 16:50:28.052793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.530 [2024-12-06 16:50:28.052840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:39.530 [2024-12-06 16:50:28.052848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.530 [2024-12-06 16:50:28.052855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.530 [2024-12-06 16:50:28.052861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.530 [2024-12-06 16:50:28.053572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.530 [2024-12-06 16:50:28.164540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.530 malloc0 00:23:39.530 [2024-12-06 16:50:28.191361] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:39.530 [2024-12-06 16:50:28.191600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2281151 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2281151 /var/tmp/bdevperf.sock 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2281151 ']' 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.530 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.788 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.788 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.788 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.788 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:39.788 [2024-12-06 16:50:28.253206] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
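[editor's note] On the launch pattern traced here: bdevperf is started with -z so it idles until configured over its RPC socket, and the harness blocks until that socket is listening before issuing any RPCs. A rough sketch of the shape (the & backgrounding and $! capture are assumptions about what the helper does internally; the flags, binary path, and socket path are exactly as traced):

    # start bdevperf idle (-z) on core mask 0x2, bound to its own RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!
    # waitforlisten (autotest_common.sh helper) polls until the UNIX socket accepts connections
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock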
00:23:39.789 [2024-12-06 16:50:28.253255] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281151 ] 00:23:39.789 [2024-12-06 16:50:28.316732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.789 [2024-12-06 16:50:28.333718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.789 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.789 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.789 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G0KVrez7L7 00:23:40.047 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:40.047 [2024-12-06 16:50:28.691998] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.306 nvme0n1 00:23:40.306 16:50:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:40.306 Running I/O for 1 seconds... 00:23:41.243 3537.00 IOPS, 13.82 MiB/s 00:23:41.243 Latency(us) 00:23:41.243 [2024-12-06T15:50:29.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.243 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:41.243 Verification LBA range: start 0x0 length 0x2000 00:23:41.243 nvme0n1 : 1.03 3558.71 13.90 0.00 0.00 35421.58 5816.32 79080.11 00:23:41.243 [2024-12-06T15:50:29.936Z] =================================================================================================================== 00:23:41.243 [2024-12-06T15:50:29.936Z] Total : 3558.71 13.90 0.00 0.00 35421.58 5816.32 79080.11 00:23:41.243 { 00:23:41.243 "results": [ 00:23:41.243 { 00:23:41.243 "job": "nvme0n1", 00:23:41.243 "core_mask": "0x2", 00:23:41.243 "workload": "verify", 00:23:41.243 "status": "finished", 00:23:41.243 "verify_range": { 00:23:41.243 "start": 0, 00:23:41.243 "length": 8192 00:23:41.243 }, 00:23:41.243 "queue_depth": 128, 00:23:41.243 "io_size": 4096, 00:23:41.243 "runtime": 1.03015, 00:23:41.243 "iops": 3558.7050429549095, 00:23:41.243 "mibps": 13.901191574042615, 00:23:41.243 "io_failed": 0, 00:23:41.243 "io_timeout": 0, 00:23:41.243 "avg_latency_us": 35421.57710129114, 00:23:41.243 "min_latency_us": 5816.32, 00:23:41.243 "max_latency_us": 79080.10666666667 00:23:41.243 } 00:23:41.243 ], 00:23:41.243 "core_count": 1 00:23:41.243 } 00:23:41.243 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:41.243 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.243 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.528 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.528 16:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:41.528 "subsystems": [ 00:23:41.528 { 00:23:41.528 "subsystem": "keyring", 00:23:41.528 "config": [ 00:23:41.528 { 00:23:41.528 "method": "keyring_file_add_key", 00:23:41.528 "params": { 00:23:41.528 "name": "key0", 00:23:41.528 "path": "/tmp/tmp.G0KVrez7L7" 00:23:41.528 } 00:23:41.528 } 00:23:41.528 ] 00:23:41.528 }, 00:23:41.528 { 00:23:41.528 "subsystem": "iobuf", 00:23:41.528 "config": [ 00:23:41.528 { 00:23:41.528 "method": "iobuf_set_options", 00:23:41.528 "params": { 00:23:41.528 "small_pool_count": 8192, 00:23:41.528 "large_pool_count": 1024, 00:23:41.528 "small_bufsize": 8192, 00:23:41.528 "large_bufsize": 135168, 00:23:41.528 "enable_numa": false 00:23:41.528 } 00:23:41.528 } 00:23:41.528 ] 00:23:41.528 }, 00:23:41.528 { 00:23:41.528 "subsystem": "sock", 00:23:41.528 "config": [ 00:23:41.528 { 00:23:41.528 "method": "sock_set_default_impl", 00:23:41.528 "params": { 00:23:41.528 "impl_name": "posix" 00:23:41.528 } 00:23:41.528 }, 00:23:41.528 { 00:23:41.528 "method": "sock_impl_set_options", 00:23:41.528 "params": { 00:23:41.528 "impl_name": "ssl", 00:23:41.528 "recv_buf_size": 4096, 00:23:41.528 "send_buf_size": 4096, 00:23:41.528 "enable_recv_pipe": true, 00:23:41.528 "enable_quickack": false, 00:23:41.528 "enable_placement_id": 0, 00:23:41.528 "enable_zerocopy_send_server": true, 00:23:41.528 "enable_zerocopy_send_client": false, 00:23:41.528 "zerocopy_threshold": 0, 00:23:41.528 "tls_version": 0, 00:23:41.528 "enable_ktls": false 00:23:41.528 } 00:23:41.528 }, 00:23:41.528 { 00:23:41.528 "method": "sock_impl_set_options", 00:23:41.528 "params": { 00:23:41.528 "impl_name": "posix", 00:23:41.528 "recv_buf_size": 2097152, 00:23:41.528 "send_buf_size": 2097152, 00:23:41.528 "enable_recv_pipe": true, 00:23:41.528 "enable_quickack": false, 00:23:41.528 "enable_placement_id": 0, 00:23:41.528 "enable_zerocopy_send_server": true, 00:23:41.528 "enable_zerocopy_send_client": false, 00:23:41.528 "zerocopy_threshold": 0, 00:23:41.528 "tls_version": 0, 00:23:41.528 "enable_ktls": false 00:23:41.528 } 00:23:41.528 } 00:23:41.528 ] 00:23:41.528 }, 00:23:41.528 { 00:23:41.528 "subsystem": "vmd", 00:23:41.528 "config": [] 00:23:41.528 }, 00:23:41.528 { 00:23:41.528 "subsystem": "accel", 00:23:41.528 "config": [ 00:23:41.528 { 00:23:41.528 "method": "accel_set_options", 00:23:41.528 "params": { 00:23:41.528 "small_cache_size": 128, 00:23:41.528 "large_cache_size": 16, 00:23:41.528 "task_count": 2048, 00:23:41.528 "sequence_count": 2048, 00:23:41.528 "buf_count": 2048 00:23:41.528 } 00:23:41.528 } 00:23:41.528 ] 00:23:41.528 }, 00:23:41.528 { 00:23:41.528 "subsystem": "bdev", 00:23:41.528 "config": [ 00:23:41.528 { 00:23:41.528 "method": "bdev_set_options", 00:23:41.528 "params": { 00:23:41.528 "bdev_io_pool_size": 65535, 00:23:41.528 "bdev_io_cache_size": 256, 00:23:41.528 "bdev_auto_examine": true, 00:23:41.528 "iobuf_small_cache_size": 128, 00:23:41.528 "iobuf_large_cache_size": 16 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "bdev_raid_set_options", 00:23:41.529 "params": { 00:23:41.529 "process_window_size_kb": 1024, 00:23:41.529 "process_max_bandwidth_mb_sec": 0 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "bdev_iscsi_set_options", 00:23:41.529 "params": { 00:23:41.529 "timeout_sec": 30 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "bdev_nvme_set_options", 00:23:41.529 "params": { 00:23:41.529 "action_on_timeout": "none", 00:23:41.529 
"timeout_us": 0, 00:23:41.529 "timeout_admin_us": 0, 00:23:41.529 "keep_alive_timeout_ms": 10000, 00:23:41.529 "arbitration_burst": 0, 00:23:41.529 "low_priority_weight": 0, 00:23:41.529 "medium_priority_weight": 0, 00:23:41.529 "high_priority_weight": 0, 00:23:41.529 "nvme_adminq_poll_period_us": 10000, 00:23:41.529 "nvme_ioq_poll_period_us": 0, 00:23:41.529 "io_queue_requests": 0, 00:23:41.529 "delay_cmd_submit": true, 00:23:41.529 "transport_retry_count": 4, 00:23:41.529 "bdev_retry_count": 3, 00:23:41.529 "transport_ack_timeout": 0, 00:23:41.529 "ctrlr_loss_timeout_sec": 0, 00:23:41.529 "reconnect_delay_sec": 0, 00:23:41.529 "fast_io_fail_timeout_sec": 0, 00:23:41.529 "disable_auto_failback": false, 00:23:41.529 "generate_uuids": false, 00:23:41.529 "transport_tos": 0, 00:23:41.529 "nvme_error_stat": false, 00:23:41.529 "rdma_srq_size": 0, 00:23:41.529 "io_path_stat": false, 00:23:41.529 "allow_accel_sequence": false, 00:23:41.529 "rdma_max_cq_size": 0, 00:23:41.529 "rdma_cm_event_timeout_ms": 0, 00:23:41.529 "dhchap_digests": [ 00:23:41.529 "sha256", 00:23:41.529 "sha384", 00:23:41.529 "sha512" 00:23:41.529 ], 00:23:41.529 "dhchap_dhgroups": [ 00:23:41.529 "null", 00:23:41.529 "ffdhe2048", 00:23:41.529 "ffdhe3072", 00:23:41.529 "ffdhe4096", 00:23:41.529 "ffdhe6144", 00:23:41.529 "ffdhe8192" 00:23:41.529 ] 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "bdev_nvme_set_hotplug", 00:23:41.529 "params": { 00:23:41.529 "period_us": 100000, 00:23:41.529 "enable": false 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "bdev_malloc_create", 00:23:41.529 "params": { 00:23:41.529 "name": "malloc0", 00:23:41.529 "num_blocks": 8192, 00:23:41.529 "block_size": 4096, 00:23:41.529 "physical_block_size": 4096, 00:23:41.529 "uuid": "69bf38b3-ab6e-44dd-8b4b-b139cc8a6f92", 00:23:41.529 "optimal_io_boundary": 0, 00:23:41.529 "md_size": 0, 00:23:41.529 "dif_type": 0, 00:23:41.529 "dif_is_head_of_md": false, 00:23:41.529 "dif_pi_format": 0 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "bdev_wait_for_examine" 00:23:41.529 } 00:23:41.529 ] 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "subsystem": "nbd", 00:23:41.529 "config": [] 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "subsystem": "scheduler", 00:23:41.529 "config": [ 00:23:41.529 { 00:23:41.529 "method": "framework_set_scheduler", 00:23:41.529 "params": { 00:23:41.529 "name": "static" 00:23:41.529 } 00:23:41.529 } 00:23:41.529 ] 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "subsystem": "nvmf", 00:23:41.529 "config": [ 00:23:41.529 { 00:23:41.529 "method": "nvmf_set_config", 00:23:41.529 "params": { 00:23:41.529 "discovery_filter": "match_any", 00:23:41.529 "admin_cmd_passthru": { 00:23:41.529 "identify_ctrlr": false 00:23:41.529 }, 00:23:41.529 "dhchap_digests": [ 00:23:41.529 "sha256", 00:23:41.529 "sha384", 00:23:41.529 "sha512" 00:23:41.529 ], 00:23:41.529 "dhchap_dhgroups": [ 00:23:41.529 "null", 00:23:41.529 "ffdhe2048", 00:23:41.529 "ffdhe3072", 00:23:41.529 "ffdhe4096", 00:23:41.529 "ffdhe6144", 00:23:41.529 "ffdhe8192" 00:23:41.529 ] 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "nvmf_set_max_subsystems", 00:23:41.529 "params": { 00:23:41.529 "max_subsystems": 1024 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "nvmf_set_crdt", 00:23:41.529 "params": { 00:23:41.529 "crdt1": 0, 00:23:41.529 "crdt2": 0, 00:23:41.529 "crdt3": 0 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "nvmf_create_transport", 00:23:41.529 "params": 
{ 00:23:41.529 "trtype": "TCP", 00:23:41.529 "max_queue_depth": 128, 00:23:41.529 "max_io_qpairs_per_ctrlr": 127, 00:23:41.529 "in_capsule_data_size": 4096, 00:23:41.529 "max_io_size": 131072, 00:23:41.529 "io_unit_size": 131072, 00:23:41.529 "max_aq_depth": 128, 00:23:41.529 "num_shared_buffers": 511, 00:23:41.529 "buf_cache_size": 4294967295, 00:23:41.529 "dif_insert_or_strip": false, 00:23:41.529 "zcopy": false, 00:23:41.529 "c2h_success": false, 00:23:41.529 "sock_priority": 0, 00:23:41.529 "abort_timeout_sec": 1, 00:23:41.529 "ack_timeout": 0, 00:23:41.529 "data_wr_pool_size": 0 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "nvmf_create_subsystem", 00:23:41.529 "params": { 00:23:41.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.529 "allow_any_host": false, 00:23:41.529 "serial_number": "00000000000000000000", 00:23:41.529 "model_number": "SPDK bdev Controller", 00:23:41.529 "max_namespaces": 32, 00:23:41.529 "min_cntlid": 1, 00:23:41.529 "max_cntlid": 65519, 00:23:41.529 "ana_reporting": false 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "nvmf_subsystem_add_host", 00:23:41.529 "params": { 00:23:41.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.529 "host": "nqn.2016-06.io.spdk:host1", 00:23:41.529 "psk": "key0" 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "nvmf_subsystem_add_ns", 00:23:41.529 "params": { 00:23:41.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.529 "namespace": { 00:23:41.529 "nsid": 1, 00:23:41.529 "bdev_name": "malloc0", 00:23:41.529 "nguid": "69BF38B3AB6E44DD8B4BB139CC8A6F92", 00:23:41.529 "uuid": "69bf38b3-ab6e-44dd-8b4b-b139cc8a6f92", 00:23:41.529 "no_auto_visible": false 00:23:41.529 } 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "nvmf_subsystem_add_listener", 00:23:41.529 "params": { 00:23:41.529 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.529 "listen_address": { 00:23:41.529 "trtype": "TCP", 00:23:41.529 "adrfam": "IPv4", 00:23:41.529 "traddr": "10.0.0.2", 00:23:41.529 "trsvcid": "4420" 00:23:41.529 }, 00:23:41.529 "secure_channel": false, 00:23:41.529 "sock_impl": "ssl" 00:23:41.529 } 00:23:41.529 } 00:23:41.529 ] 00:23:41.529 } 00:23:41.529 ] 00:23:41.529 }' 00:23:41.529 16:50:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:41.529 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:41.529 "subsystems": [ 00:23:41.529 { 00:23:41.529 "subsystem": "keyring", 00:23:41.529 "config": [ 00:23:41.529 { 00:23:41.529 "method": "keyring_file_add_key", 00:23:41.529 "params": { 00:23:41.529 "name": "key0", 00:23:41.529 "path": "/tmp/tmp.G0KVrez7L7" 00:23:41.529 } 00:23:41.529 } 00:23:41.529 ] 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "subsystem": "iobuf", 00:23:41.529 "config": [ 00:23:41.529 { 00:23:41.529 "method": "iobuf_set_options", 00:23:41.529 "params": { 00:23:41.529 "small_pool_count": 8192, 00:23:41.529 "large_pool_count": 1024, 00:23:41.529 "small_bufsize": 8192, 00:23:41.529 "large_bufsize": 135168, 00:23:41.529 "enable_numa": false 00:23:41.529 } 00:23:41.529 } 00:23:41.529 ] 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "subsystem": "sock", 00:23:41.529 "config": [ 00:23:41.529 { 00:23:41.529 "method": "sock_set_default_impl", 00:23:41.529 "params": { 00:23:41.529 "impl_name": "posix" 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "sock_impl_set_options", 00:23:41.529 
"params": { 00:23:41.529 "impl_name": "ssl", 00:23:41.529 "recv_buf_size": 4096, 00:23:41.529 "send_buf_size": 4096, 00:23:41.529 "enable_recv_pipe": true, 00:23:41.529 "enable_quickack": false, 00:23:41.529 "enable_placement_id": 0, 00:23:41.529 "enable_zerocopy_send_server": true, 00:23:41.529 "enable_zerocopy_send_client": false, 00:23:41.529 "zerocopy_threshold": 0, 00:23:41.529 "tls_version": 0, 00:23:41.529 "enable_ktls": false 00:23:41.529 } 00:23:41.529 }, 00:23:41.529 { 00:23:41.529 "method": "sock_impl_set_options", 00:23:41.529 "params": { 00:23:41.529 "impl_name": "posix", 00:23:41.529 "recv_buf_size": 2097152, 00:23:41.529 "send_buf_size": 2097152, 00:23:41.529 "enable_recv_pipe": true, 00:23:41.529 "enable_quickack": false, 00:23:41.529 "enable_placement_id": 0, 00:23:41.529 "enable_zerocopy_send_server": true, 00:23:41.530 "enable_zerocopy_send_client": false, 00:23:41.530 "zerocopy_threshold": 0, 00:23:41.530 "tls_version": 0, 00:23:41.530 "enable_ktls": false 00:23:41.530 } 00:23:41.530 } 00:23:41.530 ] 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "subsystem": "vmd", 00:23:41.530 "config": [] 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "subsystem": "accel", 00:23:41.530 "config": [ 00:23:41.530 { 00:23:41.530 "method": "accel_set_options", 00:23:41.530 "params": { 00:23:41.530 "small_cache_size": 128, 00:23:41.530 "large_cache_size": 16, 00:23:41.530 "task_count": 2048, 00:23:41.530 "sequence_count": 2048, 00:23:41.530 "buf_count": 2048 00:23:41.530 } 00:23:41.530 } 00:23:41.530 ] 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "subsystem": "bdev", 00:23:41.530 "config": [ 00:23:41.530 { 00:23:41.530 "method": "bdev_set_options", 00:23:41.530 "params": { 00:23:41.530 "bdev_io_pool_size": 65535, 00:23:41.530 "bdev_io_cache_size": 256, 00:23:41.530 "bdev_auto_examine": true, 00:23:41.530 "iobuf_small_cache_size": 128, 00:23:41.530 "iobuf_large_cache_size": 16 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "method": "bdev_raid_set_options", 00:23:41.530 "params": { 00:23:41.530 "process_window_size_kb": 1024, 00:23:41.530 "process_max_bandwidth_mb_sec": 0 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "method": "bdev_iscsi_set_options", 00:23:41.530 "params": { 00:23:41.530 "timeout_sec": 30 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "method": "bdev_nvme_set_options", 00:23:41.530 "params": { 00:23:41.530 "action_on_timeout": "none", 00:23:41.530 "timeout_us": 0, 00:23:41.530 "timeout_admin_us": 0, 00:23:41.530 "keep_alive_timeout_ms": 10000, 00:23:41.530 "arbitration_burst": 0, 00:23:41.530 "low_priority_weight": 0, 00:23:41.530 "medium_priority_weight": 0, 00:23:41.530 "high_priority_weight": 0, 00:23:41.530 "nvme_adminq_poll_period_us": 10000, 00:23:41.530 "nvme_ioq_poll_period_us": 0, 00:23:41.530 "io_queue_requests": 512, 00:23:41.530 "delay_cmd_submit": true, 00:23:41.530 "transport_retry_count": 4, 00:23:41.530 "bdev_retry_count": 3, 00:23:41.530 "transport_ack_timeout": 0, 00:23:41.530 "ctrlr_loss_timeout_sec": 0, 00:23:41.530 "reconnect_delay_sec": 0, 00:23:41.530 "fast_io_fail_timeout_sec": 0, 00:23:41.530 "disable_auto_failback": false, 00:23:41.530 "generate_uuids": false, 00:23:41.530 "transport_tos": 0, 00:23:41.530 "nvme_error_stat": false, 00:23:41.530 "rdma_srq_size": 0, 00:23:41.530 "io_path_stat": false, 00:23:41.530 "allow_accel_sequence": false, 00:23:41.530 "rdma_max_cq_size": 0, 00:23:41.530 "rdma_cm_event_timeout_ms": 0, 00:23:41.530 "dhchap_digests": [ 00:23:41.530 "sha256", 00:23:41.530 "sha384", 00:23:41.530 
"sha512" 00:23:41.530 ], 00:23:41.530 "dhchap_dhgroups": [ 00:23:41.530 "null", 00:23:41.530 "ffdhe2048", 00:23:41.530 "ffdhe3072", 00:23:41.530 "ffdhe4096", 00:23:41.530 "ffdhe6144", 00:23:41.530 "ffdhe8192" 00:23:41.530 ] 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "method": "bdev_nvme_attach_controller", 00:23:41.530 "params": { 00:23:41.530 "name": "nvme0", 00:23:41.530 "trtype": "TCP", 00:23:41.530 "adrfam": "IPv4", 00:23:41.530 "traddr": "10.0.0.2", 00:23:41.530 "trsvcid": "4420", 00:23:41.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.530 "prchk_reftag": false, 00:23:41.530 "prchk_guard": false, 00:23:41.530 "ctrlr_loss_timeout_sec": 0, 00:23:41.530 "reconnect_delay_sec": 0, 00:23:41.530 "fast_io_fail_timeout_sec": 0, 00:23:41.530 "psk": "key0", 00:23:41.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.530 "hdgst": false, 00:23:41.530 "ddgst": false, 00:23:41.530 "multipath": "multipath" 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "method": "bdev_nvme_set_hotplug", 00:23:41.530 "params": { 00:23:41.530 "period_us": 100000, 00:23:41.530 "enable": false 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "method": "bdev_enable_histogram", 00:23:41.530 "params": { 00:23:41.530 "name": "nvme0n1", 00:23:41.530 "enable": true 00:23:41.530 } 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "method": "bdev_wait_for_examine" 00:23:41.530 } 00:23:41.530 ] 00:23:41.530 }, 00:23:41.530 { 00:23:41.530 "subsystem": "nbd", 00:23:41.530 "config": [] 00:23:41.530 } 00:23:41.530 ] 00:23:41.530 }' 00:23:41.530 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2281151 00:23:41.530 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2281151 ']' 00:23:41.530 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2281151 00:23:41.530 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.530 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.530 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281151 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281151' 00:23:41.790 killing process with pid 2281151 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2281151 00:23:41.790 Received shutdown signal, test time was about 1.000000 seconds 00:23:41.790 00:23:41.790 Latency(us) 00:23:41.790 [2024-12-06T15:50:30.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:41.790 [2024-12-06T15:50:30.483Z] =================================================================================================================== 00:23:41.790 [2024-12-06T15:50:30.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2281151 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2281123 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2281123 
']' 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2281123 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281123 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281123' 00:23:41.790 killing process with pid 2281123 00:23:41.790 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2281123 00:23:41.791 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2281123 00:23:42.050 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:42.050 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:42.050 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.050 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.050 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:42.050 "subsystems": [ 00:23:42.050 { 00:23:42.050 "subsystem": "keyring", 00:23:42.050 "config": [ 00:23:42.050 { 00:23:42.050 "method": "keyring_file_add_key", 00:23:42.050 "params": { 00:23:42.050 "name": "key0", 00:23:42.050 "path": "/tmp/tmp.G0KVrez7L7" 00:23:42.050 } 00:23:42.050 } 00:23:42.050 ] 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "subsystem": "iobuf", 00:23:42.050 "config": [ 00:23:42.050 { 00:23:42.050 "method": "iobuf_set_options", 00:23:42.050 "params": { 00:23:42.050 "small_pool_count": 8192, 00:23:42.050 "large_pool_count": 1024, 00:23:42.050 "small_bufsize": 8192, 00:23:42.050 "large_bufsize": 135168, 00:23:42.050 "enable_numa": false 00:23:42.050 } 00:23:42.050 } 00:23:42.050 ] 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "subsystem": "sock", 00:23:42.050 "config": [ 00:23:42.050 { 00:23:42.050 "method": "sock_set_default_impl", 00:23:42.050 "params": { 00:23:42.050 "impl_name": "posix" 00:23:42.050 } 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "method": "sock_impl_set_options", 00:23:42.050 "params": { 00:23:42.050 "impl_name": "ssl", 00:23:42.050 "recv_buf_size": 4096, 00:23:42.050 "send_buf_size": 4096, 00:23:42.050 "enable_recv_pipe": true, 00:23:42.050 "enable_quickack": false, 00:23:42.050 "enable_placement_id": 0, 00:23:42.050 "enable_zerocopy_send_server": true, 00:23:42.050 "enable_zerocopy_send_client": false, 00:23:42.050 "zerocopy_threshold": 0, 00:23:42.050 "tls_version": 0, 00:23:42.050 "enable_ktls": false 00:23:42.050 } 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "method": "sock_impl_set_options", 00:23:42.050 "params": { 00:23:42.050 "impl_name": "posix", 00:23:42.050 "recv_buf_size": 2097152, 00:23:42.050 "send_buf_size": 2097152, 00:23:42.050 "enable_recv_pipe": true, 00:23:42.050 "enable_quickack": false, 00:23:42.050 "enable_placement_id": 0, 00:23:42.050 "enable_zerocopy_send_server": true, 00:23:42.050 "enable_zerocopy_send_client": 
false, 00:23:42.050 "zerocopy_threshold": 0, 00:23:42.050 "tls_version": 0, 00:23:42.050 "enable_ktls": false 00:23:42.050 } 00:23:42.050 } 00:23:42.050 ] 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "subsystem": "vmd", 00:23:42.050 "config": [] 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "subsystem": "accel", 00:23:42.050 "config": [ 00:23:42.050 { 00:23:42.050 "method": "accel_set_options", 00:23:42.050 "params": { 00:23:42.050 "small_cache_size": 128, 00:23:42.050 "large_cache_size": 16, 00:23:42.050 "task_count": 2048, 00:23:42.050 "sequence_count": 2048, 00:23:42.050 "buf_count": 2048 00:23:42.050 } 00:23:42.050 } 00:23:42.050 ] 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "subsystem": "bdev", 00:23:42.050 "config": [ 00:23:42.050 { 00:23:42.050 "method": "bdev_set_options", 00:23:42.050 "params": { 00:23:42.050 "bdev_io_pool_size": 65535, 00:23:42.050 "bdev_io_cache_size": 256, 00:23:42.050 "bdev_auto_examine": true, 00:23:42.050 "iobuf_small_cache_size": 128, 00:23:42.050 "iobuf_large_cache_size": 16 00:23:42.050 } 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "method": "bdev_raid_set_options", 00:23:42.050 "params": { 00:23:42.050 "process_window_size_kb": 1024, 00:23:42.050 "process_max_bandwidth_mb_sec": 0 00:23:42.050 } 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "method": "bdev_iscsi_set_options", 00:23:42.050 "params": { 00:23:42.050 "timeout_sec": 30 00:23:42.050 } 00:23:42.050 }, 00:23:42.050 { 00:23:42.050 "method": "bdev_nvme_set_options", 00:23:42.050 "params": { 00:23:42.050 "action_on_timeout": "none", 00:23:42.050 "timeout_us": 0, 00:23:42.050 "timeout_admin_us": 0, 00:23:42.050 "keep_alive_timeout_ms": 10000, 00:23:42.050 "arbitration_burst": 0, 00:23:42.050 "low_priority_weight": 0, 00:23:42.050 "medium_priority_weight": 0, 00:23:42.050 "high_priority_weight": 0, 00:23:42.050 "nvme_adminq_poll_period_us": 10000, 00:23:42.050 "nvme_ioq_poll_period_us": 0, 00:23:42.050 "io_queue_requests": 0, 00:23:42.050 "delay_cmd_submit": true, 00:23:42.050 "transport_retry_count": 4, 00:23:42.050 "bdev_retry_count": 3, 00:23:42.050 "transport_ack_timeout": 0, 00:23:42.050 "ctrlr_loss_timeout_sec": 0, 00:23:42.050 "reconnect_delay_sec": 0, 00:23:42.050 "fast_io_fail_timeout_sec": 0, 00:23:42.050 "disable_auto_failback": false, 00:23:42.050 "generate_uuids": false, 00:23:42.050 "transport_tos": 0, 00:23:42.050 "nvme_error_stat": false, 00:23:42.050 "rdma_srq_size": 0, 00:23:42.050 "io_path_stat": false, 00:23:42.050 "allow_accel_sequence": false, 00:23:42.050 "rdma_max_cq_size": 0, 00:23:42.050 "rdma_cm_event_timeout_ms": 0, 00:23:42.050 "dhchap_digests": [ 00:23:42.050 "sha256", 00:23:42.050 "sha384", 00:23:42.050 "sha512" 00:23:42.050 ], 00:23:42.050 "dhchap_dhgroups": [ 00:23:42.051 "null", 00:23:42.051 "ffdhe2048", 00:23:42.051 "ffdhe3072", 00:23:42.051 "ffdhe4096", 00:23:42.051 "ffdhe6144", 00:23:42.051 "ffdhe8192" 00:23:42.051 ] 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "bdev_nvme_set_hotplug", 00:23:42.051 "params": { 00:23:42.051 "period_us": 100000, 00:23:42.051 "enable": false 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "bdev_malloc_create", 00:23:42.051 "params": { 00:23:42.051 "name": "malloc0", 00:23:42.051 "num_blocks": 8192, 00:23:42.051 "block_size": 4096, 00:23:42.051 "physical_block_size": 4096, 00:23:42.051 "uuid": "69bf38b3-ab6e-44dd-8b4b-b139cc8a6f92", 00:23:42.051 "optimal_io_boundary": 0, 00:23:42.051 "md_size": 0, 00:23:42.051 "dif_type": 0, 00:23:42.051 "dif_is_head_of_md": false, 00:23:42.051 "dif_pi_format": 0 
00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "bdev_wait_for_examine" 00:23:42.051 } 00:23:42.051 ] 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "subsystem": "nbd", 00:23:42.051 "config": [] 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "subsystem": "scheduler", 00:23:42.051 "config": [ 00:23:42.051 { 00:23:42.051 "method": "framework_set_scheduler", 00:23:42.051 "params": { 00:23:42.051 "name": "static" 00:23:42.051 } 00:23:42.051 } 00:23:42.051 ] 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "subsystem": "nvmf", 00:23:42.051 "config": [ 00:23:42.051 { 00:23:42.051 "method": "nvmf_set_config", 00:23:42.051 "params": { 00:23:42.051 "discovery_filter": "match_any", 00:23:42.051 "admin_cmd_passthru": { 00:23:42.051 "identify_ctrlr": false 00:23:42.051 }, 00:23:42.051 "dhchap_digests": [ 00:23:42.051 "sha256", 00:23:42.051 "sha384", 00:23:42.051 "sha512" 00:23:42.051 ], 00:23:42.051 "dhchap_dhgroups": [ 00:23:42.051 "null", 00:23:42.051 "ffdhe2048", 00:23:42.051 "ffdhe3072", 00:23:42.051 "ffdhe4096", 00:23:42.051 "ffdhe6144", 00:23:42.051 "ffdhe8192" 00:23:42.051 ] 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "nvmf_set_max_subsystems", 00:23:42.051 "params": { 00:23:42.051 "max_subsystems": 1024 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "nvmf_set_crdt", 00:23:42.051 "params": { 00:23:42.051 "crdt1": 0, 00:23:42.051 "crdt2": 0, 00:23:42.051 "crdt3": 0 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "nvmf_create_transport", 00:23:42.051 "params": { 00:23:42.051 "trtype": "TCP", 00:23:42.051 "max_queue_depth": 128, 00:23:42.051 "max_io_qpairs_per_ctrlr": 127, 00:23:42.051 "in_capsule_data_size": 4096, 00:23:42.051 "max_io_size": 131072, 00:23:42.051 "io_unit_size": 131072, 00:23:42.051 "max_aq_depth": 128, 00:23:42.051 "num_shared_buffers": 511, 00:23:42.051 "buf_cache_size": 4294967295, 00:23:42.051 "dif_insert_or_strip": false, 00:23:42.051 "zcopy": false, 00:23:42.051 "c2h_success": false, 00:23:42.051 "sock_priority": 0, 00:23:42.051 "abort_timeout_sec": 1, 00:23:42.051 "ack_timeout": 0, 00:23:42.051 "data_wr_pool_size": 0 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "nvmf_create_subsystem", 00:23:42.051 "params": { 00:23:42.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.051 "allow_any_host": false, 00:23:42.051 "serial_number": "00000000000000000000", 00:23:42.051 "model_number": "SPDK bdev Controller", 00:23:42.051 "max_namespaces": 32, 00:23:42.051 "min_cntlid": 1, 00:23:42.051 "max_cntlid": 65519, 00:23:42.051 "ana_reporting": false 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "nvmf_subsystem_add_host", 00:23:42.051 "params": { 00:23:42.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.051 "host": "nqn.2016-06.io.spdk:host1", 00:23:42.051 "psk": "key0" 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "nvmf_subsystem_add_ns", 00:23:42.051 "params": { 00:23:42.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.051 "namespace": { 00:23:42.051 "nsid": 1, 00:23:42.051 "bdev_name": "malloc0", 00:23:42.051 "nguid": "69BF38B3AB6E44DD8B4BB139CC8A6F92", 00:23:42.051 "uuid": "69bf38b3-ab6e-44dd-8b4b-b139cc8a6f92", 00:23:42.051 "no_auto_visible": false 00:23:42.051 } 00:23:42.051 } 00:23:42.051 }, 00:23:42.051 { 00:23:42.051 "method": "nvmf_subsystem_add_listener", 00:23:42.051 "params": { 00:23:42.051 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.051 "listen_address": { 00:23:42.051 "trtype": "TCP", 00:23:42.051 "adrfam": "IPv4", 
00:23:42.051 "traddr": "10.0.0.2", 00:23:42.051 "trsvcid": "4420" 00:23:42.051 }, 00:23:42.051 "secure_channel": false, 00:23:42.051 "sock_impl": "ssl" 00:23:42.051 } 00:23:42.051 } 00:23:42.051 ] 00:23:42.051 } 00:23:42.051 ] 00:23:42.051 }' 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2281572 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2281572 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2281572 ']' 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.051 16:50:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:42.051 [2024-12-06 16:50:30.563865] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:42.051 [2024-12-06 16:50:30.563918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.051 [2024-12-06 16:50:30.646527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.051 [2024-12-06 16:50:30.663490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.051 [2024-12-06 16:50:30.663526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.051 [2024-12-06 16:50:30.663533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.051 [2024-12-06 16:50:30.663540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.051 [2024-12-06 16:50:30.663546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:42.051 [2024-12-06 16:50:30.664155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.311 [2024-12-06 16:50:30.858246] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.311 [2024-12-06 16:50:30.890244] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:42.311 [2024-12-06 16:50:30.890475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.880 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.880 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.880 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:42.880 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:42.880 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.880 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.880 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2281858 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2281858 /var/tmp/bdevperf.sock 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2281858 ']' 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
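[editor's note] The initiator is restored the same way below: the bdevperf-side save_config output (the keyring key0 entry plus a bdev_nvme_attach_controller entry carrying "psk": "key0") is fed to a new bdevperf through a second descriptor, so nvme0n1 reappears TLS-attached with no manual attach step. Sketch under the same process-substitution assumption (the /dev/fd/63 path in the trace suggests it; the binary path and flags are as traced):

    bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg")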
00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.881 16:50:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:42.881 "subsystems": [ 00:23:42.881 { 00:23:42.881 "subsystem": "keyring", 00:23:42.881 "config": [ 00:23:42.881 { 00:23:42.881 "method": "keyring_file_add_key", 00:23:42.881 "params": { 00:23:42.881 "name": "key0", 00:23:42.881 "path": "/tmp/tmp.G0KVrez7L7" 00:23:42.881 } 00:23:42.881 } 00:23:42.881 ] 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "subsystem": "iobuf", 00:23:42.881 "config": [ 00:23:42.881 { 00:23:42.881 "method": "iobuf_set_options", 00:23:42.881 "params": { 00:23:42.881 "small_pool_count": 8192, 00:23:42.881 "large_pool_count": 1024, 00:23:42.881 "small_bufsize": 8192, 00:23:42.881 "large_bufsize": 135168, 00:23:42.881 "enable_numa": false 00:23:42.881 } 00:23:42.881 } 00:23:42.881 ] 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "subsystem": "sock", 00:23:42.881 "config": [ 00:23:42.881 { 00:23:42.881 "method": "sock_set_default_impl", 00:23:42.881 "params": { 00:23:42.881 "impl_name": "posix" 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "sock_impl_set_options", 00:23:42.881 "params": { 00:23:42.881 "impl_name": "ssl", 00:23:42.881 "recv_buf_size": 4096, 00:23:42.881 "send_buf_size": 4096, 00:23:42.881 "enable_recv_pipe": true, 00:23:42.881 "enable_quickack": false, 00:23:42.881 "enable_placement_id": 0, 00:23:42.881 "enable_zerocopy_send_server": true, 00:23:42.881 "enable_zerocopy_send_client": false, 00:23:42.881 "zerocopy_threshold": 0, 00:23:42.881 "tls_version": 0, 00:23:42.881 "enable_ktls": false 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "sock_impl_set_options", 00:23:42.881 "params": { 00:23:42.881 "impl_name": "posix", 00:23:42.881 "recv_buf_size": 2097152, 00:23:42.881 "send_buf_size": 2097152, 00:23:42.881 "enable_recv_pipe": true, 00:23:42.881 "enable_quickack": false, 00:23:42.881 "enable_placement_id": 0, 00:23:42.881 "enable_zerocopy_send_server": true, 00:23:42.881 "enable_zerocopy_send_client": false, 00:23:42.881 "zerocopy_threshold": 0, 00:23:42.881 "tls_version": 0, 00:23:42.881 "enable_ktls": false 00:23:42.881 } 00:23:42.881 } 00:23:42.881 ] 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "subsystem": "vmd", 00:23:42.881 "config": [] 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "subsystem": "accel", 00:23:42.881 "config": [ 00:23:42.881 { 00:23:42.881 "method": "accel_set_options", 00:23:42.881 "params": { 00:23:42.881 "small_cache_size": 128, 00:23:42.881 "large_cache_size": 16, 00:23:42.881 "task_count": 2048, 00:23:42.881 "sequence_count": 2048, 00:23:42.881 "buf_count": 2048 00:23:42.881 } 00:23:42.881 } 00:23:42.881 ] 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "subsystem": "bdev", 00:23:42.881 "config": [ 00:23:42.881 { 00:23:42.881 "method": "bdev_set_options", 00:23:42.881 "params": { 00:23:42.881 "bdev_io_pool_size": 65535, 00:23:42.881 "bdev_io_cache_size": 256, 00:23:42.881 "bdev_auto_examine": true, 00:23:42.881 "iobuf_small_cache_size": 128, 00:23:42.881 "iobuf_large_cache_size": 16 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": 
"bdev_raid_set_options", 00:23:42.881 "params": { 00:23:42.881 "process_window_size_kb": 1024, 00:23:42.881 "process_max_bandwidth_mb_sec": 0 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "bdev_iscsi_set_options", 00:23:42.881 "params": { 00:23:42.881 "timeout_sec": 30 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "bdev_nvme_set_options", 00:23:42.881 "params": { 00:23:42.881 "action_on_timeout": "none", 00:23:42.881 "timeout_us": 0, 00:23:42.881 "timeout_admin_us": 0, 00:23:42.881 "keep_alive_timeout_ms": 10000, 00:23:42.881 "arbitration_burst": 0, 00:23:42.881 "low_priority_weight": 0, 00:23:42.881 "medium_priority_weight": 0, 00:23:42.881 "high_priority_weight": 0, 00:23:42.881 "nvme_adminq_poll_period_us": 10000, 00:23:42.881 "nvme_ioq_poll_period_us": 0, 00:23:42.881 "io_queue_requests": 512, 00:23:42.881 "delay_cmd_submit": true, 00:23:42.881 "transport_retry_count": 4, 00:23:42.881 "bdev_retry_count": 3, 00:23:42.881 "transport_ack_timeout": 0, 00:23:42.881 "ctrlr_loss_timeout_sec": 0, 00:23:42.881 "reconnect_delay_sec": 0, 00:23:42.881 "fast_io_fail_timeout_sec": 0, 00:23:42.881 "disable_auto_failback": false, 00:23:42.881 "generate_uuids": false, 00:23:42.881 "transport_tos": 0, 00:23:42.881 "nvme_error_stat": false, 00:23:42.881 "rdma_srq_size": 0, 00:23:42.881 "io_path_stat": false, 00:23:42.881 "allow_accel_sequence": false, 00:23:42.881 "rdma_max_cq_size": 0, 00:23:42.881 "rdma_cm_event_timeout_ms": 0, 00:23:42.881 "dhchap_digests": [ 00:23:42.881 "sha256", 00:23:42.881 "sha384", 00:23:42.881 "sha512" 00:23:42.881 ], 00:23:42.881 "dhchap_dhgroups": [ 00:23:42.881 "null", 00:23:42.881 "ffdhe2048", 00:23:42.881 "ffdhe3072", 00:23:42.881 "ffdhe4096", 00:23:42.881 "ffdhe6144", 00:23:42.881 "ffdhe8192" 00:23:42.881 ] 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "bdev_nvme_attach_controller", 00:23:42.881 "params": { 00:23:42.881 "name": "nvme0", 00:23:42.881 "trtype": "TCP", 00:23:42.881 "adrfam": "IPv4", 00:23:42.881 "traddr": "10.0.0.2", 00:23:42.881 "trsvcid": "4420", 00:23:42.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.881 "prchk_reftag": false, 00:23:42.881 "prchk_guard": false, 00:23:42.881 "ctrlr_loss_timeout_sec": 0, 00:23:42.881 "reconnect_delay_sec": 0, 00:23:42.881 "fast_io_fail_timeout_sec": 0, 00:23:42.881 "psk": "key0", 00:23:42.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.881 "hdgst": false, 00:23:42.881 "ddgst": false, 00:23:42.881 "multipath": "multipath" 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "bdev_nvme_set_hotplug", 00:23:42.881 "params": { 00:23:42.881 "period_us": 100000, 00:23:42.881 "enable": false 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "bdev_enable_histogram", 00:23:42.881 "params": { 00:23:42.881 "name": "nvme0n1", 00:23:42.881 "enable": true 00:23:42.881 } 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "method": "bdev_wait_for_examine" 00:23:42.881 } 00:23:42.881 ] 00:23:42.881 }, 00:23:42.881 { 00:23:42.881 "subsystem": "nbd", 00:23:42.881 "config": [] 00:23:42.881 } 00:23:42.881 ] 00:23:42.881 }' 00:23:42.881 [2024-12-06 16:50:31.397540] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:23:42.881 [2024-12-06 16:50:31.397594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281858 ] 00:23:42.882 [2024-12-06 16:50:31.461124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.882 [2024-12-06 16:50:31.477638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.142 [2024-12-06 16:50:31.608592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.710 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.710 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:43.710 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.710 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:43.710 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.710 16:50:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.970 Running I/O for 1 seconds... 00:23:44.909 3508.00 IOPS, 13.70 MiB/s 00:23:44.909 Latency(us) 00:23:44.909 [2024-12-06T15:50:33.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.909 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:44.909 Verification LBA range: start 0x0 length 0x2000 00:23:44.909 nvme0n1 : 1.06 3425.75 13.38 0.00 0.00 36368.73 4450.99 134567.25 00:23:44.909 [2024-12-06T15:50:33.602Z] =================================================================================================================== 00:23:44.909 [2024-12-06T15:50:33.602Z] Total : 3425.75 13.38 0.00 0.00 36368.73 4450.99 134567.25 00:23:44.909 { 00:23:44.909 "results": [ 00:23:44.909 { 00:23:44.909 "job": "nvme0n1", 00:23:44.909 "core_mask": "0x2", 00:23:44.909 "workload": "verify", 00:23:44.909 "status": "finished", 00:23:44.909 "verify_range": { 00:23:44.909 "start": 0, 00:23:44.909 "length": 8192 00:23:44.909 }, 00:23:44.909 "queue_depth": 128, 00:23:44.909 "io_size": 4096, 00:23:44.909 "runtime": 1.061665, 00:23:44.909 "iops": 3425.7510608336906, 00:23:44.909 "mibps": 13.381840081381604, 00:23:44.909 "io_failed": 0, 00:23:44.909 "io_timeout": 0, 00:23:44.909 "avg_latency_us": 36368.73349830446, 00:23:44.909 "min_latency_us": 4450.986666666667, 00:23:44.909 "max_latency_us": 134567.25333333333 00:23:44.909 } 00:23:44.909 ], 00:23:44.909 "core_count": 1 00:23:44.909 } 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' 
--id = --pid ']' 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:44.909 nvmf_trace.0 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2281858 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2281858 ']' 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2281858 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.909 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281858 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281858' 00:23:45.168 killing process with pid 2281858 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2281858 00:23:45.168 Received shutdown signal, test time was about 1.000000 seconds 00:23:45.168 00:23:45.168 Latency(us) 00:23:45.168 [2024-12-06T15:50:33.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.168 [2024-12-06T15:50:33.861Z] =================================================================================================================== 00:23:45.168 [2024-12-06T15:50:33.861Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2281858 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:45.168 rmmod nvme_tcp 00:23:45.168 rmmod nvme_fabrics 00:23:45.168 rmmod nvme_keyring 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:45.168 16:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2281572 ']' 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2281572 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2281572 ']' 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2281572 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281572 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281572' 00:23:45.168 killing process with pid 2281572 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2281572 00:23:45.168 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2281572 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.428 16:50:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.331 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.331 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.DtTbt8SnhP /tmp/tmp.pXIjZfiuNP /tmp/tmp.G0KVrez7L7 00:23:47.331 00:23:47.331 real 1m14.148s 00:23:47.331 user 1m57.990s 00:23:47.331 sys 0m22.407s 00:23:47.331 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:47.331 16:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.331 ************************************ 00:23:47.331 END TEST nvmf_tls 
00:23:47.331 ************************************ 00:23:47.331 16:50:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:47.331 16:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:47.331 16:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:47.331 16:50:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:47.591 ************************************ 00:23:47.591 START TEST nvmf_fips 00:23:47.591 ************************************ 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:47.591 * Looking for test storage... 00:23:47.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.591 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.592 --rc genhtml_branch_coverage=1 00:23:47.592 --rc genhtml_function_coverage=1 00:23:47.592 --rc genhtml_legend=1 00:23:47.592 --rc geninfo_all_blocks=1 00:23:47.592 --rc geninfo_unexecuted_blocks=1 00:23:47.592 00:23:47.592 ' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.592 --rc genhtml_branch_coverage=1 00:23:47.592 --rc genhtml_function_coverage=1 00:23:47.592 --rc genhtml_legend=1 00:23:47.592 --rc geninfo_all_blocks=1 00:23:47.592 --rc geninfo_unexecuted_blocks=1 00:23:47.592 00:23:47.592 ' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.592 --rc genhtml_branch_coverage=1 00:23:47.592 --rc genhtml_function_coverage=1 00:23:47.592 --rc genhtml_legend=1 00:23:47.592 --rc geninfo_all_blocks=1 00:23:47.592 --rc geninfo_unexecuted_blocks=1 00:23:47.592 00:23:47.592 ' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:47.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.592 --rc genhtml_branch_coverage=1 00:23:47.592 --rc genhtml_function_coverage=1 00:23:47.592 --rc genhtml_legend=1 00:23:47.592 --rc geninfo_all_blocks=1 00:23:47.592 --rc geninfo_unexecuted_blocks=1 00:23:47.592 00:23:47.592 ' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:23:47.592 16:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:47.592 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:47.593 Error setting digest 00:23:47.593 40223B99DA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:47.593 40223B99DA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.593 
16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.593 16:50:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.861 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.862 16:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:52.862 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:52.862 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.862 16:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:52.862 Found net devices under 0000:31:00.0: cvl_0_0 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:52.862 Found net devices under 0000:31:00.1: cvl_0_1 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.862 16:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.862 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:23:53.121 00:23:53.121 --- 10.0.0.2 ping statistics --- 00:23:53.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.121 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:23:53.121 00:23:53.121 --- 10.0.0.1 ping statistics --- 00:23:53.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.121 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2286874 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2286874 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2286874 ']' 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.121 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.121 [2024-12-06 16:50:41.744074] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
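Note: the nvmf_tcp_init sequence traced above amounts to a short piece of network-namespace plumbing before the target app is launched inside that namespace. Condensed sketch, with the interface and namespace names exactly as in this run (illustrative, not the verbatim common.sh code):

  ip netns add cvl_0_0_ns_spdk                       # target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # reachability check, as pinged above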
00:23:53.121 [2024-12-06 16:50:41.744135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.379 [2024-12-06 16:50:41.815064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.379 [2024-12-06 16:50:41.830433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.379 [2024-12-06 16:50:41.830461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.379 [2024-12-06 16:50:41.830467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.379 [2024-12-06 16:50:41.830472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.379 [2024-12-06 16:50:41.830476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.379 [2024-12-06 16:50:41.830962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.XWB 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.XWB 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.XWB 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.XWB 00:23:53.379 16:50:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.638 [2024-12-06 16:50:42.077347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.638 [2024-12-06 16:50:42.093353] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.638 [2024-12-06 16:50:42.093551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.638 malloc0 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.638 16:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2286934 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2286934 /var/tmp/bdevperf.sock 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2286934 ']' 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.638 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.638 [2024-12-06 16:50:42.195772] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:23:53.638 [2024-12-06 16:50:42.195825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286934 ] 00:23:53.638 [2024-12-06 16:50:42.274010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.638 [2024-12-06 16:50:42.291687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.570 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.570 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:23:54.570 16:50:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.XWB 00:23:54.570 16:50:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:54.829 [2024-12-06 16:50:43.270187] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.829 TLSTESTn1 00:23:54.829 16:50:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.829 Running I/O for 10 seconds... 
00:23:57.142 3530.00 IOPS, 13.79 MiB/s [2024-12-06T15:50:46.769Z] 3834.00 IOPS, 14.98 MiB/s [2024-12-06T15:50:47.701Z] 4101.00 IOPS, 16.02 MiB/s [2024-12-06T15:50:48.635Z] 4356.75 IOPS, 17.02 MiB/s [2024-12-06T15:50:49.583Z] 4320.00 IOPS, 16.88 MiB/s [2024-12-06T15:50:50.517Z] 4256.00 IOPS, 16.62 MiB/s [2024-12-06T15:50:51.891Z] 4290.86 IOPS, 16.76 MiB/s [2024-12-06T15:50:52.826Z] 4403.62 IOPS, 17.20 MiB/s [2024-12-06T15:50:53.760Z] 4368.89 IOPS, 17.07 MiB/s [2024-12-06T15:50:53.760Z] 4326.40 IOPS, 16.90 MiB/s 00:24:05.067 Latency(us) 00:24:05.067 [2024-12-06T15:50:53.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.067 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.067 Verification LBA range: start 0x0 length 0x2000 00:24:05.067 TLSTESTn1 : 10.04 4320.82 16.88 0.00 0.00 29558.47 6717.44 86944.43 00:24:05.067 [2024-12-06T15:50:53.760Z] =================================================================================================================== 00:24:05.067 [2024-12-06T15:50:53.760Z] Total : 4320.82 16.88 0.00 0.00 29558.47 6717.44 86944.43 00:24:05.067 { 00:24:05.067 "results": [ 00:24:05.067 { 00:24:05.067 "job": "TLSTESTn1", 00:24:05.068 "core_mask": "0x4", 00:24:05.068 "workload": "verify", 00:24:05.068 "status": "finished", 00:24:05.068 "verify_range": { 00:24:05.068 "start": 0, 00:24:05.068 "length": 8192 00:24:05.068 }, 00:24:05.068 "queue_depth": 128, 00:24:05.068 "io_size": 4096, 00:24:05.068 "runtime": 10.042538, 00:24:05.068 "iops": 4320.820095477857, 00:24:05.068 "mibps": 16.878203497960378, 00:24:05.068 "io_failed": 0, 00:24:05.068 "io_timeout": 0, 00:24:05.068 "avg_latency_us": 29558.465447394297, 00:24:05.068 "min_latency_us": 6717.44, 00:24:05.068 "max_latency_us": 86944.42666666667 00:24:05.068 } 00:24:05.068 ], 00:24:05.068 "core_count": 1 00:24:05.068 } 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:05.068 nvmf_trace.0 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2286934 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2286934 ']' 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 2286934 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286934 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286934' 00:24:05.068 killing process with pid 2286934 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2286934 00:24:05.068 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.068 00:24:05.068 Latency(us) 00:24:05.068 [2024-12-06T15:50:53.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.068 [2024-12-06T15:50:53.761Z] =================================================================================================================== 00:24:05.068 [2024-12-06T15:50:53.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2286934 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.068 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.068 rmmod nvme_tcp 00:24:05.326 rmmod nvme_fabrics 00:24:05.326 rmmod nvme_keyring 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2286874 ']' 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2286874 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2286874 ']' 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2286874 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2286874 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:05.326 16:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2286874' 00:24:05.326 killing process with pid 2286874 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2286874 00:24:05.326 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2286874 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.327 16:50:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.XWB 00:24:07.880 00:24:07.880 real 0m19.985s 00:24:07.880 user 0m23.096s 00:24:07.880 sys 0m7.540s 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.880 ************************************ 00:24:07.880 END TEST nvmf_fips 00:24:07.880 ************************************ 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:07.880 ************************************ 00:24:07.880 START TEST nvmf_control_msg_list 00:24:07.880 ************************************ 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:07.880 * Looking for test storage... 
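The nvmf_fips teardown traced above is driven by killprocess from common/autotest_common.sh: validate the pid, check the process name, signal it, then reap it. A minimal bash sketch of that pattern, reconstructed from the trace lines at autotest_common.sh@954-@978 (function body here is illustrative; the sudo-wrapper branch is omitted and error handling simplified):

    # Sketch of the killprocess pattern seen in the trace above.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                      # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0         # process already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1         # sudo wrapper handled elsewhere
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                    # reap so listeners free their ports
    }

The trailing wait matters: the next test immediately reuses port 4420, so the reactor must be fully reaped, not just signalled.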
00:24:07.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:07.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.880 --rc genhtml_branch_coverage=1 00:24:07.880 --rc genhtml_function_coverage=1 00:24:07.880 --rc genhtml_legend=1 00:24:07.880 --rc geninfo_all_blocks=1 00:24:07.880 --rc geninfo_unexecuted_blocks=1 00:24:07.880 00:24:07.880 ' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:07.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.880 --rc genhtml_branch_coverage=1 00:24:07.880 --rc genhtml_function_coverage=1 00:24:07.880 --rc genhtml_legend=1 00:24:07.880 --rc geninfo_all_blocks=1 00:24:07.880 --rc geninfo_unexecuted_blocks=1 00:24:07.880 00:24:07.880 ' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:07.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.880 --rc genhtml_branch_coverage=1 00:24:07.880 --rc genhtml_function_coverage=1 00:24:07.880 --rc genhtml_legend=1 00:24:07.880 --rc geninfo_all_blocks=1 00:24:07.880 --rc geninfo_unexecuted_blocks=1 00:24:07.880 00:24:07.880 ' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:07.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.880 --rc genhtml_branch_coverage=1 00:24:07.880 --rc genhtml_function_coverage=1 00:24:07.880 --rc genhtml_legend=1 00:24:07.880 --rc geninfo_all_blocks=1 00:24:07.880 --rc geninfo_unexecuted_blocks=1 00:24:07.880 00:24:07.880 ' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.880 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:07.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:07.881 16:50:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:13.156 16:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.156 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:13.157 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.157 16:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:13.157 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:13.157 Found net devices under 0000:31:00.0: cvl_0_0 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:13.157 Found net devices under 0000:31:00.1: cvl_0_1 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.157 16:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:24:13.157 00:24:13.157 --- 10.0.0.2 ping statistics --- 00:24:13.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.157 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:24:13.157 00:24:13.157 --- 10.0.0.1 ping statistics --- 00:24:13.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.157 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2294027 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2294027 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2294027 ']' 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.157 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.158 16:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.158 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.158 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.158 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:13.158 [2024-12-06 16:51:01.689426] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:24:13.158 [2024-12-06 16:51:01.689476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.158 [2024-12-06 16:51:01.774922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.158 [2024-12-06 16:51:01.791415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.158 [2024-12-06 16:51:01.791449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.158 [2024-12-06 16:51:01.791457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.158 [2024-12-06 16:51:01.791464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.158 [2024-12-06 16:51:01.791471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
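Those app_setup_trace notices describe how to harvest the trace ring enabled by the -e 0xFFFF flag in the nvmf_tgt command line above. A sketch of the two capture paths they name; the spdk_trace binary location and the destination paths are assumptions here, only the commands quoted in the notices are taken from the log:

    # Target was launched in the netns with shm id 0 and all tracepoint groups on:
    #   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF
    # Either path recovers the events, per the notices above:
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt   # decode a live snapshot
    cp /dev/shm/nvmf_trace.0 /tmp/                              # raw ring for offline debug

This is also what the process_shm cleanup step archives at the end of each test, as seen in the nvmf_trace.0_shm.tar.gz tar invocation earlier.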
00:24:13.158 [2024-12-06 16:51:01.792010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.418 [2024-12-06 16:51:01.889040] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.418 Malloc0 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.418 16:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.418 [2024-12-06 16:51:01.923994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2294064 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2294065 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2294066 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2294064 00:24:13.418 16:51:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.418 [2024-12-06 16:51:01.962338] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:13.418 [2024-12-06 16:51:01.972259] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:13.418 [2024-12-06 16:51:01.982147] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:14.351 Initializing NVMe Controllers 00:24:14.351 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:14.351 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:14.351 Initialization complete. Launching workers. 
00:24:14.351 ========================================================
00:24:14.351 Latency(us)
00:24:14.351 Device Information : IOPS MiB/s Average min max
00:24:14.351 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2315.00 9.04 431.85 117.70 855.22
00:24:14.351 ========================================================
00:24:14.351 Total : 2315.00 9.04 431.85 117.70 855.22
00:24:14.351
00:24:14.611 Initializing NVMe Controllers
00:24:14.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:14.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:14.611 Initialization complete. Launching workers.
00:24:14.611 ========================================================
00:24:14.611 Latency(us)
00:24:14.611 Device Information : IOPS MiB/s Average min max
00:24:14.611 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1613.00 6.30 619.85 226.83 847.96
00:24:14.611 ========================================================
00:24:14.611 Total : 1613.00 6.30 619.85 226.83 847.96
00:24:14.611
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2294065
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2294066
00:24:14.611 Initializing NVMe Controllers
00:24:14.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:14.611 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:24:14.611 Initialization complete. Launching workers.
00:24:14.611 ========================================================
00:24:14.611 Latency(us)
00:24:14.611 Device Information : IOPS MiB/s Average min max
00:24:14.611 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1742.00 6.80 573.78 127.64 953.88
00:24:14.611 ========================================================
00:24:14.611 Total : 1742.00 6.80 573.78 127.64 953.88
00:24:14.611
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 
2294027 ']' 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2294027 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2294027 ']' 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2294027 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2294027 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2294027' 00:24:14.611 killing process with pid 2294027 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2294027 00:24:14.611 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2294027 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.870 16:51:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.773 00:24:16.773 real 0m9.337s 00:24:16.773 user 0m6.156s 00:24:16.773 sys 0m4.776s 00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:16.773 ************************************ 00:24:16.773 END TEST nvmf_control_msg_list 00:24:16.773 ************************************ 
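For reference, the whole nvmf_control_msg_list run above reduces to a short RPC sequence plus three concurrent single-queue initiators: the point of --control-msg-num 1 is to force the perf jobs to contend for a single control-message buffer, and the test passes if all three still complete. A condensed sketch of what the trace executed, with scripts/rpc.py standing in for the rpc_cmd wrapper used by the harness:

    # Transport capped at one control message buffer, 768 B in-capsule data.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512        # 32 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Three initiators on distinct cores, one queue each, as in the trace.
    for mask in 0x2 0x4 0x8; do
        ./build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait    # all three must finish despite the single control-msg buffer

The per-core IOPS spread in the three latency tables above (2315 vs 1613 vs 1742) reflects that contention.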
00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.773 ************************************ 00:24:16.773 START TEST nvmf_wait_for_buf 00:24:16.773 ************************************ 00:24:16.773 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:17.034 * Looking for test storage... 00:24:17.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.034 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:17.034 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:17.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.035 --rc genhtml_branch_coverage=1 00:24:17.035 --rc genhtml_function_coverage=1 00:24:17.035 --rc genhtml_legend=1 00:24:17.035 --rc geninfo_all_blocks=1 00:24:17.035 --rc geninfo_unexecuted_blocks=1 00:24:17.035 00:24:17.035 ' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:17.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.035 --rc genhtml_branch_coverage=1 00:24:17.035 --rc genhtml_function_coverage=1 00:24:17.035 --rc genhtml_legend=1 00:24:17.035 --rc geninfo_all_blocks=1 00:24:17.035 --rc geninfo_unexecuted_blocks=1 00:24:17.035 00:24:17.035 ' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:17.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.035 --rc genhtml_branch_coverage=1 00:24:17.035 --rc genhtml_function_coverage=1 00:24:17.035 --rc genhtml_legend=1 00:24:17.035 --rc geninfo_all_blocks=1 00:24:17.035 --rc geninfo_unexecuted_blocks=1 00:24:17.035 00:24:17.035 ' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:17.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.035 --rc genhtml_branch_coverage=1 00:24:17.035 --rc genhtml_function_coverage=1 00:24:17.035 --rc genhtml_legend=1 00:24:17.035 --rc geninfo_all_blocks=1 00:24:17.035 --rc geninfo_unexecuted_blocks=1 00:24:17.035 00:24:17.035 ' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.035 16:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.035 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.036 16:51:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:22.314 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.314 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.314 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.314 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.314 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.314 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.314 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.315 
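The trace at this point is building the supported-NIC tables: nvmf/common.sh collects Intel E810/X722 and Mellanox device IDs by indexing the pci_bus_cache associative array with "vendor:device" keys (the mlx list continues below), then promotes the e810 hits into pci_devs for enumeration. A minimal sketch of that lookup pattern, with hypothetical placeholder cache contents (only the 0x8086:0x159b entry mirrors this run):

  #!/usr/bin/env bash
  # Sketch of the vendor:device lookup traced here; cache contents are illustrative.
  declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:31:00.0 0000:31:00.1"  # the two E810 ports found in this run
  )
  intel=0x8086 mellanox=0x15b3
  e810=() mlx=()
  e810+=(${pci_bus_cache["$intel:0x159b"]})   # unquoted on purpose: word-splits into ports
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) # expands to nothing when the key is absent
  pci_devs=("${e810[@]}")
  for pci in "${pci_devs[@]}"; do echo "Found $pci"; done

Run against the placeholder cache this prints the same two "Found 0000:31:00.x" addresses the log reports.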
16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:22.315 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:22.315 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:22.315 Found net devices under 0000:31:00.0: cvl_0_0 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:22.315 Found net devices under 0000:31:00.1: cvl_0_1 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.315 16:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.315 16:51:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:24:22.575 00:24:22.575 --- 10.0.0.2 ping statistics --- 00:24:22.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.575 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:24:22.575 00:24:22.575 --- 10.0.0.1 ping statistics --- 00:24:22.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.575 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.575 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2299182 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2299182 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2299182 ']' 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.834 16:51:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:22.834 [2024-12-06 16:51:11.303238] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
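Before nvmf_tgt comes up, nvmftestinit has already wired the two E810 ports into a back-to-back TCP topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened with an iptables ACCEPT rule, and the two pings above verify reachability in both directions. Condensed from the commands traced above, the equivalent manual setup is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The target is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...), so every subsequent RPC and data-path connection crosses the physical cvl_0_0/cvl_0_1 link rather than loopback.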
00:24:22.834 [2024-12-06 16:51:11.303294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.834 [2024-12-06 16:51:11.388135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.834 [2024-12-06 16:51:11.409047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.834 [2024-12-06 16:51:11.409088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.834 [2024-12-06 16:51:11.409097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.834 [2024-12-06 16:51:11.409110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.834 [2024-12-06 16:51:11.409117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.834 [2024-12-06 16:51:11.409838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.400 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.400 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:23.400 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.400 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.400 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 Malloc0 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 [2024-12-06 16:51:12.187510] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.670 [2024-12-06 16:51:12.211710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.670 16:51:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.670 [2024-12-06 16:51:12.293187] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:25.149 Initializing NVMe Controllers 00:24:25.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:25.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:25.149 Initialization complete. Launching workers. 00:24:25.149 ======================================================== 00:24:25.149 Latency(us) 00:24:25.149 Device Information : IOPS MiB/s Average min max 00:24:25.149 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165841.30 47868.75 191555.84 00:24:25.149 ======================================================== 00:24:25.149 Total : 25.00 3.12 165841.30 47868.75 191555.84 00:24:25.149 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:25.408 rmmod nvme_tcp 00:24:25.408 rmmod nvme_fabrics 00:24:25.408 rmmod nvme_keyring 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2299182 ']' 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2299182 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2299182 ']' 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2299182 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2299182 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2299182' 00:24:25.408 killing process with pid 2299182 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2299182 00:24:25.408 16:51:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2299182 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.408 16:51:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.945 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.945 00:24:27.945 real 0m10.660s 00:24:27.945 user 0m4.319s 00:24:27.945 sys 0m4.700s 00:24:27.945 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.945 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.945 ************************************ 00:24:27.945 END TEST nvmf_wait_for_buf 00:24:27.945 ************************************ 00:24:27.945 16:51:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.946 16:51:16 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:27.946 ************************************ 00:24:27.946 START TEST nvmf_fuzz 00:24:27.946 ************************************ 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:27.946 * Looking for test storage... 00:24:27.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.946 --rc genhtml_branch_coverage=1 00:24:27.946 --rc genhtml_function_coverage=1 00:24:27.946 --rc genhtml_legend=1 00:24:27.946 --rc geninfo_all_blocks=1 00:24:27.946 --rc geninfo_unexecuted_blocks=1 00:24:27.946 00:24:27.946 ' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.946 --rc genhtml_branch_coverage=1 00:24:27.946 --rc genhtml_function_coverage=1 00:24:27.946 --rc genhtml_legend=1 00:24:27.946 --rc geninfo_all_blocks=1 00:24:27.946 --rc geninfo_unexecuted_blocks=1 00:24:27.946 00:24:27.946 ' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.946 --rc genhtml_branch_coverage=1 00:24:27.946 --rc genhtml_function_coverage=1 00:24:27.946 --rc genhtml_legend=1 00:24:27.946 --rc geninfo_all_blocks=1 00:24:27.946 --rc geninfo_unexecuted_blocks=1 00:24:27.946 00:24:27.946 ' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.946 --rc genhtml_branch_coverage=1 00:24:27.946 --rc genhtml_function_coverage=1 00:24:27.946 --rc genhtml_legend=1 00:24:27.946 --rc geninfo_all_blocks=1 00:24:27.946 --rc geninfo_unexecuted_blocks=1 00:24:27.946 00:24:27.946 ' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.946 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.947 16:51:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.231 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:33.232 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:33.232 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:33.232 Found net devices under 0000:31:00.0: cvl_0_0 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:33.232 Found net devices under 0000:31:00.1: cvl_0_1 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:24:33.232 00:24:33.232 --- 10.0.0.2 ping statistics --- 00:24:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.232 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:24:33.232 00:24:33.232 --- 10.0.0.1 ping statistics --- 00:24:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.232 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.232 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2304095 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2304095 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2304095 ']' 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
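The nvmf_tcp_init sequence traced above reduces to the following standalone sketch. The interface names (cvl_0_0, cvl_0_1), the namespace name, the addresses, and every command are taken from the log itself; only the comments and the packaging as one script are editorial, and the real nvmf/common.sh wraps these steps in helpers, so read this as an illustration rather than the verbatim implementation.

    #!/usr/bin/env bash
    # Two ports of one NIC are cabled back-to-back on this test bed.
    # The target-side port is hidden in a network namespace so the
    # initiator must genuinely cross the wire to reach the target.
    ip netns add cvl_0_0_ns_spdk                 # namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator (host) side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listen port; the comment tag lets cleanup find
    # and strip exactly this rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host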
00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.233 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.492 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.492 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:33.492 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.492 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.493 Malloc0 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:33.493 16:51:21 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:05.571 Fuzzing completed. 
Shutting down the fuzz application 00:25:05.571 00:25:05.571 Dumping successful admin opcodes: 00:25:05.571 9, 10, 00:25:05.571 Dumping successful io opcodes: 00:25:05.571 0, 9, 00:25:05.571 NS: 0x2000008eff00 I/O qp, Total commands completed: 1093214, total successful commands: 6419, random_seed: 2587225536 00:25:05.571 NS: 0x2000008eff00 admin qp, Total commands completed: 135488, total successful commands: 30, random_seed: 761382976 00:25:05.571 16:51:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:05.571 Fuzzing completed. Shutting down the fuzz application 00:25:05.571 00:25:05.571 Dumping successful admin opcodes: 00:25:05.571 00:25:05.571 Dumping successful io opcodes: 00:25:05.571 00:25:05.571 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 489013957 00:25:05.571 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 489086173 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:05.571 rmmod nvme_tcp 00:25:05.571 rmmod nvme_fabrics 00:25:05.571 rmmod nvme_keyring 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 2304095 ']' 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 2304095 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2304095 ']' 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 2304095 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304095 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304095' 00:25:05.571 killing process with pid 2304095 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 2304095 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 2304095 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.571 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.572 16:51:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:07.478 00:25:07.478 real 0m39.691s 00:25:07.478 user 0m53.436s 00:25:07.478 sys 0m15.241s 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.478 ************************************ 00:25:07.478 END TEST nvmf_fuzz 00:25:07.478 ************************************ 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:07.478 ************************************ 00:25:07.478 START 
TEST nvmf_multiconnection 00:25:07.478 ************************************ 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:07.478 * Looking for test storage... 00:25:07.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:07.478 16:51:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:07.478 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.479 --rc genhtml_branch_coverage=1 00:25:07.479 --rc genhtml_function_coverage=1 00:25:07.479 --rc genhtml_legend=1 00:25:07.479 --rc geninfo_all_blocks=1 00:25:07.479 --rc geninfo_unexecuted_blocks=1 00:25:07.479 00:25:07.479 ' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.479 --rc genhtml_branch_coverage=1 00:25:07.479 --rc genhtml_function_coverage=1 00:25:07.479 --rc genhtml_legend=1 00:25:07.479 --rc geninfo_all_blocks=1 00:25:07.479 --rc geninfo_unexecuted_blocks=1 00:25:07.479 00:25:07.479 ' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.479 --rc genhtml_branch_coverage=1 00:25:07.479 --rc genhtml_function_coverage=1 00:25:07.479 --rc genhtml_legend=1 00:25:07.479 --rc geninfo_all_blocks=1 00:25:07.479 --rc geninfo_unexecuted_blocks=1 00:25:07.479 00:25:07.479 ' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:07.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.479 --rc genhtml_branch_coverage=1 00:25:07.479 --rc genhtml_function_coverage=1 00:25:07.479 --rc genhtml_legend=1 00:25:07.479 --rc geninfo_all_blocks=1 00:25:07.479 --rc geninfo_unexecuted_blocks=1 00:25:07.479 00:25:07.479 ' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.479 16:51:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:12.758 16:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:12.758 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:12.759 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:12.759 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:12.759 Found net devices under 0000:31:00.0: cvl_0_0 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:12.759 Found net devices under 0000:31:00.1: cvl_0_1 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:12.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:12.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:25:12.759 00:25:12.759 --- 10.0.0.2 ping statistics --- 00:25:12.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.759 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:12.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:12.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:25:12.759 00:25:12.759 --- 10.0.0.1 ping statistics --- 00:25:12.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:12.759 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:12.759 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=2315202 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 2315202 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 2315202 ']' 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
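The ipts wrapper seen above tags its rule with an 'SPDK_NVMF:' comment, and the iptr cleanup that ran at the end of the fuzz test earlier removes every tagged rule by filtering a full ruleset dump. A hedged two-function sketch of that pairing, reconstructed from the expanded commands the trace shows rather than copied from nvmf/common.sh:

    ipts() {   # add a rule, tagged with its own arguments for later lookup
      iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {   # drop every tagged rule in one pass
      iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

Tagging each rule with its originating arguments means cleanup never has to remember rule positions; one save/filter/restore round trip removes everything the harness added, regardless of how many tests ran in between.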
00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.760 16:52:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:12.760 [2024-12-06 16:52:01.372306] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:25:12.760 [2024-12-06 16:52:01.372371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.017 [2024-12-06 16:52:01.463994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:13.017 [2024-12-06 16:52:01.493416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.017 [2024-12-06 16:52:01.493468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.017 [2024-12-06 16:52:01.493479] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.017 [2024-12-06 16:52:01.493486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.017 [2024-12-06 16:52:01.493493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
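nvmf_tgt was launched with -m 0xF here, so DPDK reports four available cores and one reactor comes up per set bit, as the next lines confirm. Decoding the mask is plain bit arithmetic; a throwaway sketch, not harness code:

    mask=0xF                       # 0xF = 0b1111 -> cores 0..3
    for ((core = 0; (mask >> core) > 0; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done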
00:25:13.017 [2024-12-06 16:52:01.495692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.017 [2024-12-06 16:52:01.495856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.017 [2024-12-06 16:52:01.496015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.017 [2024-12-06 16:52:01.496016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.586 [2024-12-06 16:52:02.188259] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.586 Malloc1 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
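The xtrace lines around this point are successive iterations of a single loop in multiconnection.sh; condensed, it creates NVMF_SUBSYS=11 malloc-backed subsystems, each with one namespace and one TCP listener. The loop shape and every argument below come straight from the trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512); only the condensation is editorial.

    for i in $(seq 1 "$NVMF_SUBSYS"); do              # NVMF_SUBSYS=11
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i" # 64 MiB bdev, 512 B blocks
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
    done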
00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.586 [2024-12-06 16:52:02.253889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.586 Malloc2 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.586 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 Malloc3 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 Malloc4 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.846 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 Malloc5 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 Malloc6 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 Malloc7 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
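The same four RPCs repeat for each of the eleven subsystems: create a 64 MB malloc bdev with 512-byte blocks, create the subsystem (-a allows any host, -s sets its serial number), attach the bdev as a namespace, and add a TCP listener. A condensed plain-shell sketch of that loop, using SPDK's scripts/rpc.py (an assumption about invocation style — the harness issues the same methods through its rpc_cmd wrapper), with the addresses, sizes, and names taken from the log:

    #!/usr/bin/env bash
    # Provisioning loop as driven by multiconnection.sh (sketch; assumes a
    # running SPDK nvmf target whose TCP transport was created earlier).
    rpc=./scripts/rpc.py
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        "$rpc" bdev_malloc_create 64 512 -b "Malloc$i"   # 64 MB bdev, 512 B blocks
        "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # Note: for the listener, -s is the TCP service id (port), not a serial.
        "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The serial numbers SPDK1..SPDK11 matter later: the initiator side uses them to recognize which namespace belongs to which subsystem.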
00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 Malloc8 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.847 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 Malloc9 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:14.107 16:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 Malloc10 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 Malloc11 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.107 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:14.108 16:52:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:16.010 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:16.010 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:16.010 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:16.010 16:52:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:16.010 16:52:04 
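Each subsystem is then attached from the initiator with nvme-cli, and the waitforserial helper polls lsblk until a block device carrying the expected SPDK serial appears — up to 16 attempts, two seconds apart, the (( i++ <= 15 )) / sleep 2 cadence visible in the surrounding xtrace. A standalone sketch of that connect-and-wait step, with the host NQN/ID and target address copied from the log and the function body reconstructed from the traced lines:

    # Connect cnode1 and wait for its namespace to show up (sketch).
    hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid" \
        --hostid="$hostid" -t tcp -n nqn.2016-06.io.spdk:cnode1 \
        -a 10.0.0.2 -s 4420

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches, e.g. SPDK1.
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1    # device never appeared
    }
    waitforserial SPDK1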
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.915 16:52:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:19.297 16:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:19.297 16:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:19.297 16:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.297 16:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:19.297 16:52:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.204 16:52:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:23.113 16:52:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:23.114 16:52:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:23.114 16:52:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:25:23.114 16:52:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:23.114 16:52:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.017 16:52:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:26.397 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:26.397 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.397 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.397 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:26.397 16:52:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.299 16:52:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:30.205 16:52:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:30.205 16:52:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:25:30.205 16:52:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:30.205 16:52:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:30.205 16:52:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:32.111 16:52:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:34.015 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:34.015 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:34.015 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:34.015 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:34.015 16:52:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:35.924 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:35.924 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:35.924 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:35.924 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:35.924 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:35.925 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:35.925 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:35.925 16:52:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:37.828 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:37.828 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:37.828 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:37.828 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:37.828 16:52:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:39.730 16:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:41.107 16:52:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:41.107 16:52:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.107 16:52:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.107 16:52:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.107 16:52:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.650 16:52:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:45.027 16:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:45.027 16:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:45.027 16:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.027 16:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:45.027 16:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:46.927 16:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:49.030 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:49.030 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:49.030 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.030 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:49.030 16:52:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:50.937 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:50.937 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:50.937 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:50.937 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:50.937 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.937 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:50.937 16:52:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:50.937 16:52:39 
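Once all eleven namespaces are visible (the last connect, for cnode11, follows directly below), the test drives them with fio through SPDK's fio-wrapper script: first the sequential-read pass whose generated job file is echoed a little further down, then a randwrite pass. The wrapper flags map straight onto that job file — -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes a time_based runtime of 10 seconds — with one libaio job per /dev/nvmeXn1 device. Roughly what a single [jobN] section amounts to as a direct fio invocation (a sketch for one device; the real wrapper generates the full eleven-job file shown in the log):

    # One-device equivalent of a generated [job0] section (illustrative).
    fio --name=job0 --filename=/dev/nvme0n1 --rw=read --bs=262144 \
        --iodepth=64 --time_based --runtime=10 --ioengine=libaio \
        --direct=1 --invalidate=1 --norandommap --numjobs=1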
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:52.842 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:52.842 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:52.842 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.842 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:52.842 16:52:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:54.747 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:54.747 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:54.747 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:54.747 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:54.747 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.747 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:54.747 16:52:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:54.747 [global] 00:25:54.747 thread=1 00:25:54.747 invalidate=1 00:25:54.747 rw=read 00:25:54.747 time_based=1 00:25:54.747 runtime=10 00:25:54.747 ioengine=libaio 00:25:54.747 direct=1 00:25:54.747 bs=262144 00:25:54.747 iodepth=64 00:25:54.747 norandommap=1 00:25:54.747 numjobs=1 00:25:54.747 00:25:54.747 [job0] 00:25:54.747 filename=/dev/nvme0n1 00:25:54.747 [job1] 00:25:54.747 filename=/dev/nvme10n1 00:25:54.747 [job2] 00:25:54.747 filename=/dev/nvme1n1 00:25:54.747 [job3] 00:25:54.747 filename=/dev/nvme2n1 00:25:54.747 [job4] 00:25:54.747 filename=/dev/nvme3n1 00:25:54.747 [job5] 00:25:54.747 filename=/dev/nvme4n1 00:25:54.747 [job6] 00:25:54.747 filename=/dev/nvme5n1 00:25:54.747 [job7] 00:25:54.747 filename=/dev/nvme6n1 00:25:54.747 [job8] 00:25:54.747 filename=/dev/nvme7n1 00:25:54.747 [job9] 00:25:54.747 filename=/dev/nvme8n1 00:25:54.747 [job10] 00:25:54.747 filename=/dev/nvme9n1 00:25:55.015 Could not set queue depth (nvme0n1) 00:25:55.015 Could not set queue depth (nvme10n1) 00:25:55.015 Could not set queue depth (nvme1n1) 00:25:55.015 Could not set queue depth (nvme2n1) 00:25:55.015 Could not set queue depth (nvme3n1) 00:25:55.015 Could not set queue depth (nvme4n1) 00:25:55.015 Could not set queue depth (nvme5n1) 00:25:55.015 Could not set queue depth (nvme6n1) 00:25:55.015 Could not set queue depth (nvme7n1) 00:25:55.015 Could not set queue depth (nvme8n1) 00:25:55.015 Could not set queue depth (nvme9n1) 00:25:55.276 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:55.276 fio-3.35 00:25:55.276 Starting 11 threads 00:26:07.475 00:26:07.475 job0: (groupid=0, jobs=1): err= 0: pid=2324974: Fri Dec 6 16:52:54 2024 00:26:07.475 read: IOPS=224, BW=56.1MiB/s (58.8MB/s)(571MiB/10176msec) 00:26:07.475 slat (usec): min=6, max=346582, avg=3804.89, stdev=22762.75 00:26:07.475 clat (msec): min=15, max=1150, avg=281.10, stdev=268.21 00:26:07.475 lat (msec): min=15, max=1150, avg=284.90, stdev=272.22 00:26:07.475 clat percentiles (msec): 00:26:07.475 | 1.00th=[ 22], 5.00th=[ 29], 10.00th=[ 46], 20.00th=[ 80], 00:26:07.475 | 30.00th=[ 102], 40.00th=[ 142], 50.00th=[ 186], 60.00th=[ 226], 00:26:07.475 | 70.00th=[ 279], 80.00th=[ 518], 90.00th=[ 768], 95.00th=[ 860], 00:26:07.475 | 99.00th=[ 1020], 99.50th=[ 1036], 99.90th=[ 1150], 99.95th=[ 1150], 00:26:07.475 | 99.99th=[ 1150] 00:26:07.475 bw ( KiB/s): min=12263, max=238080, per=6.36%, avg=56793.55, stdev=56533.03, samples=20 00:26:07.475 iops : min= 47, max= 930, avg=221.80, stdev=220.86, samples=20 00:26:07.475 lat (msec) : 20=0.44%, 50=10.56%, 100=18.83%, 250=35.79%, 500=14.15% 00:26:07.475 lat (msec) : 750=8.76%, 1000=10.07%, 2000=1.40% 00:26:07.475 cpu : usr=0.05%, sys=0.63%, ctx=431, majf=0, minf=4097 00:26:07.475 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:26:07.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.475 issued rwts: total=2283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.475 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.475 job1: (groupid=0, jobs=1): err= 0: pid=2324976: Fri Dec 6 16:52:54 2024 00:26:07.475 read: IOPS=445, BW=111MiB/s (117MB/s)(1124MiB/10083msec) 00:26:07.475 slat (usec): min=8, max=200547, avg=1987.32, stdev=8353.53 00:26:07.475 clat (msec): min=10, max=916, avg=141.39, stdev=149.80 00:26:07.475 lat (msec): min=11, max=916, avg=143.38, stdev=151.56 00:26:07.475 clat percentiles (msec): 00:26:07.475 | 1.00th=[ 27], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 35], 00:26:07.475 | 30.00th=[ 37], 40.00th=[ 41], 50.00th=[ 87], 60.00th=[ 124], 00:26:07.475 | 70.00th=[ 169], 80.00th=[ 226], 90.00th=[ 317], 95.00th=[ 481], 00:26:07.475 | 99.00th=[ 684], 99.50th=[ 768], 99.90th=[ 885], 99.95th=[ 885], 00:26:07.475 | 99.99th=[ 919] 00:26:07.475 bw ( KiB/s): min=23040, 
max=438784, per=12.70%, avg=113450.80, stdev=120717.00, samples=20 00:26:07.475 iops : min= 90, max= 1714, avg=443.15, stdev=471.56, samples=20 00:26:07.475 lat (msec) : 20=0.07%, 50=45.04%, 100=8.25%, 250=30.96%, 500=11.21% 00:26:07.475 lat (msec) : 750=3.91%, 1000=0.56% 00:26:07.475 cpu : usr=0.14%, sys=1.21%, ctx=783, majf=0, minf=4097 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:07.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=4496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.476 job2: (groupid=0, jobs=1): err= 0: pid=2324977: Fri Dec 6 16:52:54 2024 00:26:07.476 read: IOPS=640, BW=160MiB/s (168MB/s)(1627MiB/10160msec) 00:26:07.476 slat (usec): min=7, max=475734, avg=1360.67, stdev=8926.26 00:26:07.476 clat (msec): min=2, max=832, avg=98.46, stdev=127.59 00:26:07.476 lat (msec): min=2, max=1033, avg=99.82, stdev=129.22 00:26:07.476 clat percentiles (msec): 00:26:07.476 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 42], 00:26:07.476 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 47], 60.00th=[ 56], 00:26:07.476 | 70.00th=[ 70], 80.00th=[ 94], 90.00th=[ 255], 95.00th=[ 401], 00:26:07.476 | 99.00th=[ 642], 99.50th=[ 760], 99.90th=[ 793], 99.95th=[ 793], 00:26:07.476 | 99.99th=[ 835] 00:26:07.476 bw ( KiB/s): min=17920, max=388096, per=18.47%, avg=164989.50, stdev=132053.16, samples=20 00:26:07.476 iops : min= 70, max= 1516, avg=644.45, stdev=515.86, samples=20 00:26:07.476 lat (msec) : 4=0.03%, 10=0.15%, 20=2.50%, 50=52.93%, 100=25.52% 00:26:07.476 lat (msec) : 250=8.48%, 500=7.56%, 750=2.04%, 1000=0.78% 00:26:07.476 cpu : usr=0.14%, sys=1.45%, ctx=1360, majf=0, minf=4097 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:07.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=6509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.476 job3: (groupid=0, jobs=1): err= 0: pid=2324978: Fri Dec 6 16:52:54 2024 00:26:07.476 read: IOPS=375, BW=93.9MiB/s (98.5MB/s)(947MiB/10089msec) 00:26:07.476 slat (usec): min=5, max=394032, avg=2304.24, stdev=13328.11 00:26:07.476 clat (usec): min=1143, max=960387, avg=167993.30, stdev=181896.33 00:26:07.476 lat (usec): min=1181, max=960409, avg=170297.53, stdev=184300.39 00:26:07.476 clat percentiles (msec): 00:26:07.476 | 1.00th=[ 16], 5.00th=[ 21], 10.00th=[ 46], 20.00th=[ 51], 00:26:07.476 | 30.00th=[ 53], 40.00th=[ 57], 50.00th=[ 126], 60.00th=[ 144], 00:26:07.476 | 70.00th=[ 159], 80.00th=[ 211], 90.00th=[ 472], 95.00th=[ 625], 00:26:07.476 | 99.00th=[ 852], 99.50th=[ 869], 99.90th=[ 927], 99.95th=[ 961], 00:26:07.476 | 99.99th=[ 961] 00:26:07.476 bw ( KiB/s): min=12288, max=316416, per=10.68%, avg=95379.95, stdev=93541.92, samples=20 00:26:07.476 iops : min= 48, max= 1236, avg=372.50, stdev=365.46, samples=20 00:26:07.476 lat (msec) : 2=0.29%, 4=0.13%, 10=0.24%, 20=3.83%, 50=15.33% 00:26:07.476 lat (msec) : 100=23.70%, 250=40.01%, 500=7.60%, 750=6.62%, 1000=2.24% 00:26:07.476 cpu : usr=0.14%, sys=1.14%, ctx=803, majf=0, minf=3535 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:07.476 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=3789,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.476 job4: (groupid=0, jobs=1): err= 0: pid=2324979: Fri Dec 6 16:52:54 2024 00:26:07.476 read: IOPS=154, BW=38.5MiB/s (40.4MB/s)(389MiB/10085msec) 00:26:07.476 slat (usec): min=5, max=461603, avg=5365.89, stdev=26546.46 00:26:07.476 clat (msec): min=4, max=989, avg=409.07, stdev=312.01 00:26:07.476 lat (msec): min=4, max=1035, avg=414.44, stdev=315.59 00:26:07.476 clat percentiles (msec): 00:26:07.476 | 1.00th=[ 8], 5.00th=[ 14], 10.00th=[ 20], 20.00th=[ 24], 00:26:07.476 | 30.00th=[ 44], 40.00th=[ 300], 50.00th=[ 435], 60.00th=[ 550], 00:26:07.476 | 70.00th=[ 651], 80.00th=[ 735], 90.00th=[ 810], 95.00th=[ 869], 00:26:07.476 | 99.00th=[ 936], 99.50th=[ 961], 99.90th=[ 969], 99.95th=[ 986], 00:26:07.476 | 99.99th=[ 986] 00:26:07.476 bw ( KiB/s): min= 8192, max=262144, per=4.28%, avg=38192.65, stdev=53451.02, samples=20 00:26:07.476 iops : min= 32, max= 1024, avg=149.10, stdev=208.81, samples=20 00:26:07.476 lat (msec) : 10=2.70%, 20=9.58%, 50=18.26%, 100=0.19%, 250=5.40% 00:26:07.476 lat (msec) : 500=18.14%, 750=29.00%, 1000=16.72% 00:26:07.476 cpu : usr=0.02%, sys=0.47%, ctx=290, majf=0, minf=4097 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:26:07.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=1555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.476 job5: (groupid=0, jobs=1): err= 0: pid=2324980: Fri Dec 6 16:52:54 2024 00:26:07.476 read: IOPS=366, BW=91.7MiB/s (96.1MB/s)(931MiB/10155msec) 00:26:07.476 slat (usec): min=6, max=433277, avg=2019.36, stdev=14889.53 00:26:07.476 clat (msec): min=2, max=1338, avg=172.38, stdev=248.39 00:26:07.476 lat (msec): min=2, max=1338, avg=174.40, stdev=251.16 00:26:07.476 clat percentiles (msec): 00:26:07.476 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 12], 20.00th=[ 20], 00:26:07.476 | 30.00th=[ 28], 40.00th=[ 45], 50.00th=[ 72], 60.00th=[ 134], 00:26:07.476 | 70.00th=[ 157], 80.00th=[ 203], 90.00th=[ 575], 95.00th=[ 760], 00:26:07.476 | 99.00th=[ 1116], 99.50th=[ 1250], 99.90th=[ 1318], 99.95th=[ 1334], 00:26:07.476 | 99.99th=[ 1334] 00:26:07.476 bw ( KiB/s): min= 6144, max=349696, per=10.49%, avg=93672.50, stdev=93267.03, samples=20 00:26:07.476 iops : min= 24, max= 1366, avg=365.90, stdev=364.31, samples=20 00:26:07.476 lat (msec) : 4=1.64%, 10=5.99%, 20=12.67%, 50=23.98%, 100=9.24% 00:26:07.476 lat (msec) : 250=29.75%, 500=5.75%, 750=5.24%, 1000=4.03%, 2000=1.72% 00:26:07.476 cpu : usr=0.16%, sys=1.00%, ctx=1385, majf=0, minf=4097 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:07.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=3724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.476 job6: (groupid=0, jobs=1): err= 0: pid=2324981: Fri Dec 6 16:52:54 2024 00:26:07.476 read: IOPS=265, BW=66.3MiB/s (69.5MB/s)(674MiB/10160msec) 00:26:07.476 slat 
(usec): min=8, max=478957, avg=3507.12, stdev=20743.35 00:26:07.476 clat (msec): min=15, max=1363, avg=237.55, stdev=250.60 00:26:07.476 lat (msec): min=15, max=1363, avg=241.06, stdev=253.97 00:26:07.476 clat percentiles (msec): 00:26:07.476 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 57], 00:26:07.476 | 30.00th=[ 92], 40.00th=[ 126], 50.00th=[ 161], 60.00th=[ 186], 00:26:07.476 | 70.00th=[ 209], 80.00th=[ 347], 90.00th=[ 684], 95.00th=[ 835], 00:26:07.476 | 99.00th=[ 1183], 99.50th=[ 1183], 99.90th=[ 1200], 99.95th=[ 1368], 00:26:07.476 | 99.99th=[ 1368] 00:26:07.476 bw ( KiB/s): min=13312, max=247808, per=7.54%, avg=67353.60, stdev=59962.93, samples=20 00:26:07.476 iops : min= 52, max= 968, avg=263.10, stdev=234.23, samples=20 00:26:07.476 lat (msec) : 20=0.26%, 50=15.58%, 100=15.92%, 250=45.16%, 500=8.01% 00:26:07.476 lat (msec) : 750=8.13%, 1000=5.45%, 2000=1.48% 00:26:07.476 cpu : usr=0.08%, sys=0.70%, ctx=453, majf=0, minf=4097 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:07.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=2695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.476 job7: (groupid=0, jobs=1): err= 0: pid=2324982: Fri Dec 6 16:52:54 2024 00:26:07.476 read: IOPS=424, BW=106MiB/s (111MB/s)(1080MiB/10173msec) 00:26:07.476 slat (usec): min=5, max=567987, avg=1856.78, stdev=17750.74 00:26:07.476 clat (usec): min=1526, max=1335.3k, avg=148711.45, stdev=236492.75 00:26:07.476 lat (usec): min=1576, max=1335.3k, avg=150568.23, stdev=238993.33 00:26:07.476 clat percentiles (msec): 00:26:07.476 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 26], 20.00th=[ 37], 00:26:07.476 | 30.00th=[ 42], 40.00th=[ 54], 50.00th=[ 72], 60.00th=[ 80], 00:26:07.476 | 70.00th=[ 92], 80.00th=[ 165], 90.00th=[ 359], 95.00th=[ 776], 00:26:07.476 | 99.00th=[ 1267], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334], 00:26:07.476 | 99.99th=[ 1334] 00:26:07.476 bw ( KiB/s): min= 1536, max=400384, per=12.19%, avg=108895.70, stdev=106444.76, samples=20 00:26:07.476 iops : min= 6, max= 1564, avg=425.30, stdev=415.86, samples=20 00:26:07.476 lat (msec) : 2=0.19%, 4=0.93%, 10=3.47%, 20=1.76%, 50=32.52% 00:26:07.476 lat (msec) : 100=36.50%, 250=9.33%, 500=8.04%, 750=2.13%, 1000=3.10% 00:26:07.476 lat (msec) : 2000=2.04% 00:26:07.476 cpu : usr=0.13%, sys=1.25%, ctx=999, majf=0, minf=4097 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:07.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=4318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.476 job8: (groupid=0, jobs=1): err= 0: pid=2324983: Fri Dec 6 16:52:54 2024 00:26:07.476 read: IOPS=301, BW=75.4MiB/s (79.1MB/s)(767MiB/10169msec) 00:26:07.476 slat (usec): min=5, max=474635, avg=2485.41, stdev=15755.32 00:26:07.476 clat (msec): min=7, max=1075, avg=209.47, stdev=218.01 00:26:07.476 lat (msec): min=7, max=1075, avg=211.96, stdev=220.63 00:26:07.476 clat percentiles (msec): 00:26:07.476 | 1.00th=[ 17], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 34], 00:26:07.476 | 30.00th=[ 77], 40.00th=[ 114], 50.00th=[ 130], 60.00th=[ 178], 00:26:07.476 | 70.00th=[ 239], 
80.00th=[ 326], 90.00th=[ 514], 95.00th=[ 768], 00:26:07.476 | 99.00th=[ 919], 99.50th=[ 986], 99.90th=[ 1083], 99.95th=[ 1083], 00:26:07.476 | 99.99th=[ 1083] 00:26:07.476 bw ( KiB/s): min= 8704, max=409600, per=8.61%, avg=76902.40, stdev=86466.92, samples=20 00:26:07.476 iops : min= 34, max= 1600, avg=300.40, stdev=337.76, samples=20 00:26:07.476 lat (msec) : 10=0.03%, 20=1.99%, 50=25.88%, 100=6.49%, 250=37.58% 00:26:07.476 lat (msec) : 500=17.70%, 750=5.08%, 1000=4.86%, 2000=0.39% 00:26:07.476 cpu : usr=0.09%, sys=0.86%, ctx=538, majf=0, minf=4098 00:26:07.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:07.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.476 issued rwts: total=3068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.476 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.477 job9: (groupid=0, jobs=1): err= 0: pid=2324984: Fri Dec 6 16:52:54 2024 00:26:07.477 read: IOPS=145, BW=36.3MiB/s (38.0MB/s)(369MiB/10163msec) 00:26:07.477 slat (usec): min=7, max=348432, avg=5593.19, stdev=27752.92 00:26:07.477 clat (msec): min=20, max=1061, avg=434.93, stdev=283.06 00:26:07.477 lat (msec): min=20, max=1185, avg=440.53, stdev=286.60 00:26:07.477 clat percentiles (msec): 00:26:07.477 | 1.00th=[ 28], 5.00th=[ 46], 10.00th=[ 67], 20.00th=[ 123], 00:26:07.477 | 30.00th=[ 199], 40.00th=[ 300], 50.00th=[ 439], 60.00th=[ 550], 00:26:07.477 | 70.00th=[ 634], 80.00th=[ 726], 90.00th=[ 818], 95.00th=[ 902], 00:26:07.477 | 99.00th=[ 953], 99.50th=[ 969], 99.90th=[ 1062], 99.95th=[ 1062], 00:26:07.477 | 99.99th=[ 1062] 00:26:07.477 bw ( KiB/s): min= 9728, max=114176, per=4.04%, avg=36119.15, stdev=28675.24, samples=20 00:26:07.477 iops : min= 38, max= 446, avg=141.05, stdev=112.03, samples=20 00:26:07.477 lat (msec) : 50=6.17%, 100=9.56%, 250=20.68%, 500=18.64%, 750=26.51% 00:26:07.477 lat (msec) : 1000=18.17%, 2000=0.27% 00:26:07.477 cpu : usr=0.03%, sys=0.44%, ctx=255, majf=0, minf=4097 00:26:07.477 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:26:07.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.477 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.477 issued rwts: total=1475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.477 job10: (groupid=0, jobs=1): err= 0: pid=2324985: Fri Dec 6 16:52:54 2024 00:26:07.477 read: IOPS=157, BW=39.4MiB/s (41.3MB/s)(397MiB/10084msec) 00:26:07.477 slat (usec): min=6, max=455006, avg=5096.37, stdev=24375.17 00:26:07.477 clat (usec): min=805, max=1124.1k, avg=400796.03, stdev=281574.74 00:26:07.477 lat (usec): min=824, max=1124.1k, avg=405892.40, stdev=285390.26 00:26:07.477 clat percentiles (usec): 00:26:07.477 | 1.00th=[ 1029], 5.00th=[ 1483], 10.00th=[ 2089], 00:26:07.477 | 20.00th=[ 160433], 30.00th=[ 212861], 40.00th=[ 248513], 00:26:07.477 | 50.00th=[ 367002], 60.00th=[ 471860], 70.00th=[ 566232], 00:26:07.477 | 80.00th=[ 675283], 90.00th=[ 817890], 95.00th=[ 893387], 00:26:07.477 | 99.00th=[1027605], 99.50th=[1082131], 99.90th=[1115685], 00:26:07.477 | 99.95th=[1115685], 99.99th=[1115685] 00:26:07.477 bw ( KiB/s): min=12288, max=134144, per=4.37%, avg=39008.75, stdev=28713.05, samples=20 00:26:07.477 iops : min= 48, max= 524, avg=152.30, stdev=112.20, samples=20 00:26:07.477 lat (usec) : 1000=0.76% 00:26:07.477 lat (msec) 
: 2=9.01%, 4=0.88%, 10=0.31%, 20=0.13%, 50=0.57% 00:26:07.477 lat (msec) : 100=3.90%, 250=24.69%, 500=23.87%, 750=22.10%, 1000=12.66% 00:26:07.477 lat (msec) : 2000=1.13% 00:26:07.477 cpu : usr=0.01%, sys=0.47%, ctx=415, majf=0, minf=4097 00:26:07.477 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:26:07.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:07.477 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:07.477 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:07.477 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:07.477 00:26:07.477 Run status group 0 (all jobs): 00:26:07.477 READ: bw=872MiB/s (915MB/s), 36.3MiB/s-160MiB/s (38.0MB/s-168MB/s), io=8875MiB (9306MB), run=10083-10176msec 00:26:07.477 00:26:07.477 Disk stats (read/write): 00:26:07.477 nvme0n1: ios=4446/0, merge=0/0, ticks=1237623/0, in_queue=1237623, util=96.77% 00:26:07.477 nvme10n1: ios=8823/0, merge=0/0, ticks=1200531/0, in_queue=1200531, util=96.95% 00:26:07.477 nvme1n1: ios=12931/0, merge=0/0, ticks=1241976/0, in_queue=1241976, util=97.41% 00:26:07.477 nvme2n1: ios=7422/0, merge=0/0, ticks=1209434/0, in_queue=1209434, util=97.59% 00:26:07.477 nvme3n1: ios=2942/0, merge=0/0, ticks=1206646/0, in_queue=1206646, util=97.65% 00:26:07.477 nvme4n1: ios=7372/0, merge=0/0, ticks=1247795/0, in_queue=1247795, util=98.15% 00:26:07.477 nvme5n1: ios=5343/0, merge=0/0, ticks=1256271/0, in_queue=1256271, util=98.36% 00:26:07.477 nvme6n1: ios=8519/0, merge=0/0, ticks=1228775/0, in_queue=1228775, util=98.57% 00:26:07.477 nvme7n1: ios=6062/0, merge=0/0, ticks=1259853/0, in_queue=1259853, util=98.92% 00:26:07.477 nvme8n1: ios=2911/0, merge=0/0, ticks=1254624/0, in_queue=1254624, util=99.05% 00:26:07.477 nvme9n1: ios=2967/0, merge=0/0, ticks=1210749/0, in_queue=1210749, util=99.20% 00:26:07.477 16:52:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:07.477 [global] 00:26:07.477 thread=1 00:26:07.477 invalidate=1 00:26:07.477 rw=randwrite 00:26:07.477 time_based=1 00:26:07.477 runtime=10 00:26:07.477 ioengine=libaio 00:26:07.477 direct=1 00:26:07.477 bs=262144 00:26:07.477 iodepth=64 00:26:07.477 norandommap=1 00:26:07.477 numjobs=1 00:26:07.477 00:26:07.477 [job0] 00:26:07.477 filename=/dev/nvme0n1 00:26:07.477 [job1] 00:26:07.477 filename=/dev/nvme10n1 00:26:07.477 [job2] 00:26:07.477 filename=/dev/nvme1n1 00:26:07.477 [job3] 00:26:07.477 filename=/dev/nvme2n1 00:26:07.477 [job4] 00:26:07.477 filename=/dev/nvme3n1 00:26:07.477 [job5] 00:26:07.477 filename=/dev/nvme4n1 00:26:07.477 [job6] 00:26:07.477 filename=/dev/nvme5n1 00:26:07.477 [job7] 00:26:07.477 filename=/dev/nvme6n1 00:26:07.477 [job8] 00:26:07.477 filename=/dev/nvme7n1 00:26:07.477 [job9] 00:26:07.477 filename=/dev/nvme8n1 00:26:07.477 [job10] 00:26:07.477 filename=/dev/nvme9n1 00:26:07.477 Could not set queue depth (nvme0n1) 00:26:07.477 Could not set queue depth (nvme10n1) 00:26:07.477 Could not set queue depth (nvme1n1) 00:26:07.477 Could not set queue depth (nvme2n1) 00:26:07.477 Could not set queue depth (nvme3n1) 00:26:07.477 Could not set queue depth (nvme4n1) 00:26:07.477 Could not set queue depth (nvme5n1) 00:26:07.477 Could not set queue depth (nvme6n1) 00:26:07.477 Could not set queue depth (nvme7n1) 00:26:07.477 Could not set queue depth (nvme8n1) 00:26:07.477 Could not set queue depth 
(nvme9n1) 00:26:07.477 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:07.477 fio-3.35 00:26:07.477 Starting 11 threads 00:26:17.454 00:26:17.454 job0: (groupid=0, jobs=1): err= 0: pid=2327130: Fri Dec 6 16:53:05 2024 00:26:17.454 write: IOPS=623, BW=156MiB/s (163MB/s)(1568MiB/10069msec); 0 zone resets 00:26:17.454 slat (usec): min=12, max=98690, avg=1540.35, stdev=3672.37 00:26:17.454 clat (usec): min=1769, max=345884, avg=101163.26, stdev=64020.21 00:26:17.454 lat (msec): min=2, max=345, avg=102.70, stdev=64.89 00:26:17.454 clat percentiles (msec): 00:26:17.454 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 48], 00:26:17.454 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 90], 60.00th=[ 101], 00:26:17.454 | 70.00th=[ 113], 80.00th=[ 155], 90.00th=[ 199], 95.00th=[ 211], 00:26:17.454 | 99.00th=[ 305], 99.50th=[ 326], 99.90th=[ 342], 99.95th=[ 347], 00:26:17.454 | 99.99th=[ 347] 00:26:17.454 bw ( KiB/s): min=49152, max=349184, per=11.37%, avg=158976.00, stdev=94780.43, samples=20 00:26:17.454 iops : min= 192, max= 1364, avg=621.00, stdev=370.24, samples=20 00:26:17.454 lat (msec) : 2=0.02%, 4=0.16%, 10=0.33%, 20=0.06%, 50=32.65% 00:26:17.454 lat (msec) : 100=26.48%, 250=37.14%, 500=3.16% 00:26:17.454 cpu : usr=1.25%, sys=1.68%, ctx=1649, majf=0, minf=1 00:26:17.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:17.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.454 issued rwts: total=0,6273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.454 job1: (groupid=0, jobs=1): err= 0: pid=2327153: Fri Dec 6 16:53:05 2024 00:26:17.454 write: IOPS=364, BW=91.2MiB/s (95.6MB/s)(923MiB/10119msec); 0 zone resets 00:26:17.454 slat (usec): min=12, max=116082, avg=2363.09, stdev=5737.64 00:26:17.454 clat (usec): min=1512, max=531965, avg=172982.35, stdev=84761.42 00:26:17.454 lat (msec): min=2, max=538, avg=175.35, stdev=85.84 00:26:17.454 clat percentiles (msec): 00:26:17.454 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 59], 20.00th=[ 90], 00:26:17.454 | 30.00th=[ 
146], 40.00th=[ 169], 50.00th=[ 188], 60.00th=[ 199], 00:26:17.454 | 70.00th=[ 209], 80.00th=[ 228], 90.00th=[ 253], 95.00th=[ 271], 00:26:17.454 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 527], 99.95th=[ 527], 00:26:17.454 | 99.99th=[ 531] 00:26:17.454 bw ( KiB/s): min=34816, max=230912, per=6.65%, avg=92902.40, stdev=43166.98, samples=20 00:26:17.454 iops : min= 136, max= 902, avg=362.90, stdev=168.62, samples=20 00:26:17.454 lat (msec) : 2=0.03%, 4=0.41%, 10=1.79%, 20=1.76%, 50=5.09% 00:26:17.454 lat (msec) : 100=12.30%, 250=67.61%, 500=10.13%, 750=0.89% 00:26:17.454 cpu : usr=0.77%, sys=0.89%, ctx=1527, majf=0, minf=1 00:26:17.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:17.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.454 issued rwts: total=0,3692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.454 job2: (groupid=0, jobs=1): err= 0: pid=2327179: Fri Dec 6 16:53:05 2024 00:26:17.454 write: IOPS=551, BW=138MiB/s (145MB/s)(1390MiB/10076msec); 0 zone resets 00:26:17.454 slat (usec): min=11, max=32556, avg=1625.46, stdev=3756.99 00:26:17.454 clat (msec): min=15, max=438, avg=114.30, stdev=73.52 00:26:17.454 lat (msec): min=16, max=438, avg=115.93, stdev=74.50 00:26:17.454 clat percentiles (msec): 00:26:17.454 | 1.00th=[ 34], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 72], 00:26:17.454 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 94], 60.00th=[ 106], 00:26:17.454 | 70.00th=[ 120], 80.00th=[ 157], 90.00th=[ 203], 95.00th=[ 284], 00:26:17.454 | 99.00th=[ 401], 99.50th=[ 422], 99.90th=[ 439], 99.95th=[ 439], 00:26:17.454 | 99.99th=[ 439] 00:26:17.454 bw ( KiB/s): min=38912, max=406528, per=10.07%, avg=140763.20, stdev=80729.96, samples=20 00:26:17.454 iops : min= 152, max= 1588, avg=549.85, stdev=315.35, samples=20 00:26:17.454 lat (msec) : 20=0.14%, 50=15.18%, 100=38.39%, 250=40.14%, 500=6.15% 00:26:17.454 cpu : usr=0.93%, sys=1.09%, ctx=1706, majf=0, minf=1 00:26:17.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:17.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.454 issued rwts: total=0,5561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.454 job3: (groupid=0, jobs=1): err= 0: pid=2327197: Fri Dec 6 16:53:05 2024 00:26:17.454 write: IOPS=632, BW=158MiB/s (166MB/s)(1592MiB/10066msec); 0 zone resets 00:26:17.454 slat (usec): min=11, max=20096, avg=1535.78, stdev=3086.09 00:26:17.454 clat (msec): min=7, max=229, avg=99.59, stdev=47.57 00:26:17.454 lat (msec): min=8, max=229, avg=101.13, stdev=48.24 00:26:17.454 clat percentiles (msec): 00:26:17.454 | 1.00th=[ 25], 5.00th=[ 37], 10.00th=[ 42], 20.00th=[ 47], 00:26:17.454 | 30.00th=[ 65], 40.00th=[ 86], 50.00th=[ 106], 60.00th=[ 115], 00:26:17.454 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 171], 95.00th=[ 194], 00:26:17.454 | 99.00th=[ 211], 99.50th=[ 213], 99.90th=[ 230], 99.95th=[ 230], 00:26:17.454 | 99.99th=[ 230] 00:26:17.454 bw ( KiB/s): min=83968, max=382464, per=11.55%, avg=161433.60, stdev=75770.56, samples=20 00:26:17.454 iops : min= 328, max= 1494, avg=630.60, stdev=295.98, samples=20 00:26:17.454 lat (msec) : 10=0.03%, 20=0.31%, 50=21.42%, 100=25.14%, 250=53.10% 00:26:17.454 cpu : 
usr=1.09%, sys=1.16%, ctx=1729, majf=0, minf=1 00:26:17.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:17.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.454 issued rwts: total=0,6369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.454 job4: (groupid=0, jobs=1): err= 0: pid=2327207: Fri Dec 6 16:53:05 2024 00:26:17.454 write: IOPS=386, BW=96.7MiB/s (101MB/s)(975MiB/10078msec); 0 zone resets 00:26:17.454 slat (usec): min=17, max=40273, avg=2349.37, stdev=5083.55 00:26:17.454 clat (msec): min=17, max=441, avg=163.00, stdev=84.46 00:26:17.454 lat (msec): min=17, max=441, avg=165.35, stdev=85.51 00:26:17.454 clat percentiles (msec): 00:26:17.454 | 1.00th=[ 65], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 86], 00:26:17.454 | 30.00th=[ 109], 40.00th=[ 116], 50.00th=[ 122], 60.00th=[ 157], 00:26:17.454 | 70.00th=[ 205], 80.00th=[ 251], 90.00th=[ 292], 95.00th=[ 321], 00:26:17.454 | 99.00th=[ 397], 99.50th=[ 418], 99.90th=[ 435], 99.95th=[ 439], 00:26:17.454 | 99.99th=[ 443] 00:26:17.454 bw ( KiB/s): min=40960, max=198144, per=7.02%, avg=98201.60, stdev=46160.95, samples=20 00:26:17.454 iops : min= 160, max= 774, avg=383.60, stdev=180.32, samples=20 00:26:17.454 lat (msec) : 20=0.10%, 50=0.23%, 100=26.93%, 250=52.17%, 500=20.57% 00:26:17.454 cpu : usr=0.96%, sys=0.84%, ctx=1240, majf=0, minf=1 00:26:17.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:17.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.454 issued rwts: total=0,3899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.454 job5: (groupid=0, jobs=1): err= 0: pid=2327239: Fri Dec 6 16:53:05 2024 00:26:17.454 write: IOPS=502, BW=126MiB/s (132MB/s)(1266MiB/10076msec); 0 zone resets 00:26:17.454 slat (usec): min=14, max=176704, avg=1799.63, stdev=4903.87 00:26:17.454 clat (msec): min=7, max=557, avg=125.49, stdev=62.66 00:26:17.454 lat (msec): min=7, max=579, avg=127.28, stdev=63.34 00:26:17.454 clat percentiles (msec): 00:26:17.454 | 1.00th=[ 14], 5.00th=[ 43], 10.00th=[ 62], 20.00th=[ 89], 00:26:17.454 | 30.00th=[ 107], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 122], 00:26:17.454 | 70.00th=[ 132], 80.00th=[ 159], 90.00th=[ 194], 95.00th=[ 211], 00:26:17.454 | 99.00th=[ 414], 99.50th=[ 464], 99.90th=[ 531], 99.95th=[ 558], 00:26:17.454 | 99.99th=[ 558] 00:26:17.454 bw ( KiB/s): min=82432, max=221696, per=9.16%, avg=128051.20, stdev=36240.01, samples=20 00:26:17.454 iops : min= 322, max= 866, avg=500.20, stdev=141.56, samples=20 00:26:17.454 lat (msec) : 10=0.28%, 20=1.74%, 50=3.89%, 100=19.80%, 250=71.41% 00:26:17.454 lat (msec) : 500=2.67%, 750=0.22% 00:26:17.454 cpu : usr=0.84%, sys=1.24%, ctx=1787, majf=0, minf=1 00:26:17.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:17.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.455 issued rwts: total=0,5065,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.455 job6: (groupid=0, jobs=1): err= 0: pid=2327255: Fri Dec 6 16:53:05 2024 
00:26:17.455 write: IOPS=626, BW=157MiB/s (164MB/s)(1576MiB/10058msec); 0 zone resets 00:26:17.455 slat (usec): min=17, max=42825, avg=1578.56, stdev=3154.21 00:26:17.455 clat (msec): min=44, max=282, avg=100.51, stdev=47.57 00:26:17.455 lat (msec): min=44, max=282, avg=102.09, stdev=48.23 00:26:17.455 clat percentiles (msec): 00:26:17.455 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 72], 00:26:17.455 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 86], 00:26:17.455 | 70.00th=[ 93], 80.00th=[ 125], 90.00th=[ 176], 95.00th=[ 220], 00:26:17.455 | 99.00th=[ 255], 99.50th=[ 268], 99.90th=[ 279], 99.95th=[ 284], 00:26:17.455 | 99.99th=[ 284] 00:26:17.455 bw ( KiB/s): min=64000, max=249856, per=11.43%, avg=159769.60, stdev=58655.55, samples=20 00:26:17.455 iops : min= 250, max= 976, avg=624.10, stdev=229.12, samples=20 00:26:17.455 lat (msec) : 50=0.17%, 100=77.22%, 250=21.22%, 500=1.38% 00:26:17.455 cpu : usr=1.38%, sys=1.75%, ctx=1561, majf=0, minf=1 00:26:17.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:17.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.455 issued rwts: total=0,6304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.455 job7: (groupid=0, jobs=1): err= 0: pid=2327269: Fri Dec 6 16:53:05 2024 00:26:17.455 write: IOPS=465, BW=116MiB/s (122MB/s)(1176MiB/10114msec); 0 zone resets 00:26:17.455 slat (usec): min=11, max=92578, avg=1788.10, stdev=4572.06 00:26:17.455 clat (msec): min=3, max=363, avg=135.78, stdev=84.70 00:26:17.455 lat (msec): min=3, max=365, avg=137.56, stdev=85.79 00:26:17.455 clat percentiles (msec): 00:26:17.455 | 1.00th=[ 12], 5.00th=[ 34], 10.00th=[ 57], 20.00th=[ 81], 00:26:17.455 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 89], 60.00th=[ 97], 00:26:17.455 | 70.00th=[ 186], 80.00th=[ 239], 90.00th=[ 268], 95.00th=[ 292], 00:26:17.455 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 359], 00:26:17.455 | 99.99th=[ 363] 00:26:17.455 bw ( KiB/s): min=51200, max=257536, per=8.50%, avg=118817.15, stdev=62220.05, samples=20 00:26:17.455 iops : min= 200, max= 1006, avg=464.10, stdev=243.07, samples=20 00:26:17.455 lat (msec) : 4=0.02%, 10=0.79%, 20=1.49%, 50=5.25%, 100=54.63% 00:26:17.455 lat (msec) : 250=21.98%, 500=15.84% 00:26:17.455 cpu : usr=0.87%, sys=0.94%, ctx=1908, majf=0, minf=1 00:26:17.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:17.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.455 issued rwts: total=0,4704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.455 job8: (groupid=0, jobs=1): err= 0: pid=2327301: Fri Dec 6 16:53:05 2024 00:26:17.455 write: IOPS=417, BW=104MiB/s (109MB/s)(1055MiB/10119msec); 0 zone resets 00:26:17.455 slat (usec): min=16, max=113631, avg=2051.97, stdev=4908.86 00:26:17.455 clat (msec): min=13, max=472, avg=151.37, stdev=74.47 00:26:17.455 lat (msec): min=13, max=472, avg=153.42, stdev=75.41 00:26:17.455 clat percentiles (msec): 00:26:17.455 | 1.00th=[ 20], 5.00th=[ 44], 10.00th=[ 102], 20.00th=[ 110], 00:26:17.455 | 30.00th=[ 115], 40.00th=[ 118], 50.00th=[ 125], 60.00th=[ 136], 00:26:17.455 | 70.00th=[ 157], 80.00th=[ 205], 90.00th=[ 268], 95.00th=[ 305], 
00:26:17.455 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 435], 00:26:17.455 | 99.99th=[ 472] 00:26:17.455 bw ( KiB/s): min=45056, max=168960, per=7.61%, avg=106393.60, stdev=34738.88, samples=20 00:26:17.455 iops : min= 176, max= 660, avg=415.60, stdev=135.70, samples=20 00:26:17.455 lat (msec) : 20=1.16%, 50=4.74%, 100=3.29%, 250=79.34%, 500=11.47% 00:26:17.455 cpu : usr=0.88%, sys=1.02%, ctx=1601, majf=0, minf=1 00:26:17.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:17.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.455 issued rwts: total=0,4220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.455 job9: (groupid=0, jobs=1): err= 0: pid=2327318: Fri Dec 6 16:53:05 2024 00:26:17.455 write: IOPS=452, BW=113MiB/s (119MB/s)(1139MiB/10062msec); 0 zone resets 00:26:17.455 slat (usec): min=13, max=117905, avg=1955.30, stdev=5442.07 00:26:17.455 clat (msec): min=2, max=494, avg=139.39, stdev=86.87 00:26:17.455 lat (msec): min=2, max=494, avg=141.34, stdev=88.09 00:26:17.455 clat percentiles (msec): 00:26:17.455 | 1.00th=[ 9], 5.00th=[ 35], 10.00th=[ 51], 20.00th=[ 83], 00:26:17.455 | 30.00th=[ 92], 40.00th=[ 99], 50.00th=[ 109], 60.00th=[ 121], 00:26:17.455 | 70.00th=[ 184], 80.00th=[ 199], 90.00th=[ 251], 95.00th=[ 321], 00:26:17.455 | 99.00th=[ 439], 99.50th=[ 464], 99.90th=[ 489], 99.95th=[ 493], 00:26:17.455 | 99.99th=[ 493] 00:26:17.455 bw ( KiB/s): min=40960, max=213504, per=8.22%, avg=114986.45, stdev=51497.56, samples=20 00:26:17.455 iops : min= 160, max= 834, avg=449.15, stdev=201.14, samples=20 00:26:17.455 lat (msec) : 4=0.04%, 10=1.25%, 20=1.56%, 50=7.16%, 100=32.35% 00:26:17.455 lat (msec) : 250=47.61%, 500=10.04% 00:26:17.455 cpu : usr=0.76%, sys=1.10%, ctx=1634, majf=0, minf=1 00:26:17.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:17.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.455 issued rwts: total=0,4554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.455 job10: (groupid=0, jobs=1): err= 0: pid=2327332: Fri Dec 6 16:53:05 2024 00:26:17.455 write: IOPS=456, BW=114MiB/s (120MB/s)(1155MiB/10112msec); 0 zone resets 00:26:17.455 slat (usec): min=17, max=144287, avg=2002.60, stdev=5302.85 00:26:17.455 clat (msec): min=35, max=507, avg=137.97, stdev=92.28 00:26:17.455 lat (msec): min=35, max=519, avg=139.97, stdev=93.39 00:26:17.455 clat percentiles (msec): 00:26:17.455 | 1.00th=[ 54], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 80], 00:26:17.455 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 90], 60.00th=[ 97], 00:26:17.455 | 70.00th=[ 163], 80.00th=[ 213], 90.00th=[ 271], 95.00th=[ 317], 00:26:17.455 | 99.00th=[ 472], 99.50th=[ 489], 99.90th=[ 502], 99.95th=[ 506], 00:26:17.455 | 99.99th=[ 510] 00:26:17.455 bw ( KiB/s): min=36864, max=254976, per=8.35%, avg=116684.80, stdev=66135.55, samples=20 00:26:17.455 iops : min= 144, max= 996, avg=455.80, stdev=258.34, samples=20 00:26:17.455 lat (msec) : 50=0.13%, 100=61.78%, 250=24.41%, 500=13.57%, 750=0.11% 00:26:17.455 cpu : usr=0.74%, sys=1.04%, ctx=1319, majf=0, minf=1 00:26:17.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:17.455 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:17.455 issued rwts: total=0,4621,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:17.455 00:26:17.455 Run status group 0 (all jobs): 00:26:17.455 WRITE: bw=1365MiB/s (1432MB/s), 91.2MiB/s-158MiB/s (95.6MB/s-166MB/s), io=13.5GiB (14.5GB), run=10058-10119msec 00:26:17.455 00:26:17.455 Disk stats (read/write): 00:26:17.455 nvme0n1: ios=51/12188, merge=0/0, ticks=2682/1192540, in_queue=1195222, util=100.00% 00:26:17.455 nvme10n1: ios=44/7341, merge=0/0, ticks=86/1233335, in_queue=1233421, util=97.10% 00:26:17.455 nvme1n1: ios=0/10841, merge=0/0, ticks=0/1196012, in_queue=1196012, util=96.97% 00:26:17.455 nvme2n1: ios=0/12273, merge=0/0, ticks=0/1195609, in_queue=1195609, util=97.20% 00:26:17.455 nvme3n1: ios=47/7508, merge=0/0, ticks=787/1195109, in_queue=1195896, util=100.00% 00:26:17.455 nvme4n1: ios=0/9846, merge=0/0, ticks=0/1196091, in_queue=1196091, util=97.76% 00:26:17.455 nvme5n1: ios=0/12205, merge=0/0, ticks=0/1200239, in_queue=1200239, util=97.98% 00:26:17.455 nvme6n1: ios=0/9377, merge=0/0, ticks=0/1235697, in_queue=1235697, util=98.20% 00:26:17.455 nvme7n1: ios=0/8396, merge=0/0, ticks=0/1232789, in_queue=1232789, util=98.74% 00:26:17.455 nvme8n1: ios=38/8737, merge=0/0, ticks=2795/1182332, in_queue=1185127, util=100.00% 00:26:17.455 nvme9n1: ios=41/9215, merge=0/0, ticks=2290/1225956, in_queue=1228246, util=100.00% 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:17.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:17.455 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.456 16:53:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:17.456 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.456 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:17.715 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.715 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:17.975 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.975 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:18.234 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.234 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:18.494 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.494 16:53:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:18.494 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
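For reference, the teardown loop driving the trace above, reconstructed from the multiconnection.sh@37-40 xtrace markers — a sketch rather than the verbatim script (NVMF_SUBSYS is 11 in this run; rpc_cmd and waitforserial_disconnect are helpers from the SPDK test harness):

    for i in $(seq 1 "$NVMF_SUBSYS"); do                               # @37
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"               # @38: drop the initiator-side connection
        waitforserial_disconnect "SPDK$i"                              # @39: wait until lsblk stops listing the serial
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"    # @40: remove the target-side subsystem over RPC
    done

Each iteration prints the "NQN:... disconnected 1 controller(s)" line seen above before the matching subsystem is deleted.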
00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.494 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:18.753 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:18.753 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:19.012 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
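The waitforserial_disconnect helper that every iteration re-enters (autotest_common.sh@1223-1235 in the trace) polls lsblk until no block device reports the given serial. A minimal reconstruction consistent with the xtrace above — the retry cap and sleep interval are assumptions, since this log only shows the fast path where the device is already gone:

    waitforserial_disconnect() {
        local serial=$1
        local i=0                                               # @1223
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do   # @1224
            ((i++ < 15)) || return 1                            # assumed retry cap, not visible in this log
            sleep 1                                             # assumed poll interval
        done
        # re-check in list layout, as the trace does just before returning
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1   # @1231
        return 0                                                # @1235
    }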
00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:19.012 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.012 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:19.272 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.272 rmmod nvme_tcp 00:26:19.272 rmmod nvme_fabrics 00:26:19.272 rmmod nvme_keyring 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 2315202 ']' 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 2315202 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 2315202 ']' 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 2315202 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2315202 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2315202' 00:26:19.272 killing process with pid 2315202 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 2315202 00:26:19.272 16:53:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 2315202 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:19.531 16:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.531 16:53:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:22.069 00:26:22.069 real 1m14.253s 00:26:22.069 user 4m46.265s 00:26:22.069 sys 0m13.982s 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.069 ************************************ 00:26:22.069 END TEST nvmf_multiconnection 00:26:22.069 ************************************ 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:22.069 ************************************ 00:26:22.069 START TEST nvmf_initiator_timeout 00:26:22.069 ************************************ 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:22.069 * Looking for test storage... 
00:26:22.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:22.069 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:22.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.070 --rc genhtml_branch_coverage=1 00:26:22.070 --rc genhtml_function_coverage=1 00:26:22.070 --rc genhtml_legend=1 00:26:22.070 --rc geninfo_all_blocks=1 00:26:22.070 --rc geninfo_unexecuted_blocks=1 00:26:22.070 00:26:22.070 ' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:22.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.070 --rc genhtml_branch_coverage=1 00:26:22.070 --rc genhtml_function_coverage=1 00:26:22.070 --rc genhtml_legend=1 00:26:22.070 --rc geninfo_all_blocks=1 00:26:22.070 --rc geninfo_unexecuted_blocks=1 00:26:22.070 00:26:22.070 ' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:22.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.070 --rc genhtml_branch_coverage=1 00:26:22.070 --rc genhtml_function_coverage=1 00:26:22.070 --rc genhtml_legend=1 00:26:22.070 --rc geninfo_all_blocks=1 00:26:22.070 --rc geninfo_unexecuted_blocks=1 00:26:22.070 00:26:22.070 ' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:22.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:22.070 --rc genhtml_branch_coverage=1 00:26:22.070 --rc genhtml_function_coverage=1 00:26:22.070 --rc genhtml_legend=1 00:26:22.070 --rc geninfo_all_blocks=1 00:26:22.070 --rc geninfo_unexecuted_blocks=1 00:26:22.070 00:26:22.070 ' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triple repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=[go toolchain prepended again; duplicated tail trimmed] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=[protoc prepended again; duplicated tail trimmed] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo [the exported PATH] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.070 16:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:22.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:22.070 16:53:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:27.338 16:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:27.338 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.338 16:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:27.338 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:27.338 Found net devices under 0000:31:00.0: cvl_0_0 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.338 16:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:27.338 Found net devices under 0000:31:00.1: cvl_0_1 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.338 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.339 16:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:27.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:26:27.339 00:26:27.339 --- 10.0.0.2 ping statistics --- 00:26:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.339 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:27.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:26:27.339 00:26:27.339 --- 10.0.0.1 ping statistics --- 00:26:27.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.339 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=2333742 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
2333742 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 2333742 ']' 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:27.339 [2024-12-06 16:53:15.722368] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:26:27.339 [2024-12-06 16:53:15.722406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.339 [2024-12-06 16:53:15.800419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.339 [2024-12-06 16:53:15.818385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.339 [2024-12-06 16:53:15.818420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.339 [2024-12-06 16:53:15.818428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.339 [2024-12-06 16:53:15.818435] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.339 [2024-12-06 16:53:15.818441] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
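For reference, the NVMe/TCP topology that nvmftestinit traced above condenses to roughly the following shell sketch (assuming the same cvl_0_0/cvl_0_1 ice ports, root privileges, and the nvmf_tgt binary from this workspace; a condensation of the trace, not a verbatim excerpt of common.sh):

    # Move the target port into its own namespace; the initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP listener port on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions before launching the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the SPDK target inside the namespace (core mask 0xF, all tracepoint groups).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF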
00:26:27.339 [2024-12-06 16:53:15.820136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.339 [2024-12-06 16:53:15.820172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.339 [2024-12-06 16:53:15.820229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.339 [2024-12-06 16:53:15.820229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 Malloc0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 Delay0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 [2024-12-06 16:53:15.956923] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.339 16:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:27.339 [2024-12-06 16:53:15.981819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.339 16:53:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:29.244 16:53:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:29.244 16:53:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:29.244 16:53:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.244 16:53:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:29.244 16:53:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2334538 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:31.149 16:53:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:31.149 [global] 00:26:31.149 thread=1 00:26:31.149 invalidate=1 00:26:31.149 rw=write 00:26:31.149 time_based=1 00:26:31.149 runtime=60 00:26:31.149 ioengine=libaio 00:26:31.149 direct=1 00:26:31.149 bs=4096 00:26:31.149 iodepth=1 00:26:31.149 norandommap=0 00:26:31.149 numjobs=1 00:26:31.149 00:26:31.149 verify_dump=1 00:26:31.149 verify_backlog=512 00:26:31.149 verify_state_save=0 00:26:31.149 do_verify=1 00:26:31.149 verify=crc32c-intel 00:26:31.149 [job0] 00:26:31.149 filename=/dev/nvme0n1 00:26:31.149 Could not set queue depth (nvme0n1) 00:26:31.408 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:31.408 fio-3.35 00:26:31.408 Starting 1 thread 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.943 true 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.943 true 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.943 true 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:33.943 true 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.943 16:53:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.235 16:53:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.235 true 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.235 true 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.235 true 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:37.235 true 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:37.235 16:53:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2334538 00:27:33.482 00:27:33.482 job0: (groupid=0, jobs=1): err= 0: pid=2334857: Fri Dec 6 16:54:20 2024 00:27:33.482 read: IOPS=198, BW=795KiB/s (814kB/s)(46.6MiB/60001msec) 00:27:33.482 slat (usec): min=2, max=11155, avg=17.73, stdev=115.65 00:27:33.482 clat (usec): min=200, max=41867k, avg=4499.10, stdev=383320.98 00:27:33.482 lat (usec): min=212, max=41867k, avg=4516.83, stdev=383321.06 00:27:33.482 clat percentiles (usec): 00:27:33.482 | 1.00th=[ 529], 5.00th=[ 693], 10.00th=[ 766], 20.00th=[ 840], 00:27:33.482 | 30.00th=[ 873], 40.00th=[ 898], 50.00th=[ 922], 60.00th=[ 938], 00:27:33.482 | 70.00th=[ 955], 80.00th=[ 971], 90.00th=[ 996], 95.00th=[ 1020], 00:27:33.482 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[42206], 99.95th=[42206], 00:27:33.482 | 99.99th=[42730] 00:27:33.482 write: IOPS=204, BW=819KiB/s (839kB/s)(48.0MiB/60001msec); 0 zone resets 00:27:33.482 slat (nsec): min=3356, max=69172, avg=15670.43, stdev=8738.30 00:27:33.482 clat (usec): min=137, max=1136, avg=473.81, stdev=104.36 00:27:33.482 lat (usec): min=148, max=1187, avg=489.48, stdev=107.65 00:27:33.482 clat percentiles (usec): 00:27:33.482 | 1.00th=[ 245], 5.00th=[ 297], 10.00th=[ 343], 20.00th=[ 383], 00:27:33.482 | 30.00th=[ 424], 40.00th=[ 453], 50.00th=[ 474], 60.00th=[ 494], 00:27:33.482 | 70.00th=[ 523], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 644], 00:27:33.482 | 99.00th=[ 742], 99.50th=[ 783], 99.90th=[ 857], 99.95th=[ 881], 
00:27:33.482 | 99.99th=[ 1020] 00:27:33.482 bw ( KiB/s): min= 256, max= 4096, per=100.00%, avg=2854.79, stdev=1322.31, samples=33 00:27:33.482 iops : min= 64, max= 1024, avg=713.70, stdev=330.58, samples=33 00:27:33.482 lat (usec) : 250=0.72%, 500=31.44%, 750=22.18%, 1000=41.43% 00:27:33.482 lat (msec) : 2=4.12%, 50=0.11%, >=2000=0.01% 00:27:33.482 cpu : usr=0.46%, sys=1.00%, ctx=24220, majf=0, minf=1 00:27:33.482 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:33.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.482 issued rwts: total=11929,12288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.482 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:33.482 00:27:33.482 Run status group 0 (all jobs): 00:27:33.482 READ: bw=795KiB/s (814kB/s), 795KiB/s-795KiB/s (814kB/s-814kB/s), io=46.6MiB (48.9MB), run=60001-60001msec 00:27:33.482 WRITE: bw=819KiB/s (839kB/s), 819KiB/s-819KiB/s (839kB/s-839kB/s), io=48.0MiB (50.3MB), run=60001-60001msec 00:27:33.482 00:27:33.482 Disk stats (read/write): 00:27:33.482 nvme0n1: ios=11923/12288, merge=0/0, ticks=10789/5059, in_queue=15848, util=99.78% 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:33.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:33.483 nvmf hotplug test: fio successful as expected 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:33.483 rmmod nvme_tcp 00:27:33.483 rmmod nvme_fabrics 00:27:33.483 rmmod nvme_keyring 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 2333742 ']' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 2333742 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 2333742 ']' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 2333742 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2333742 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2333742' 00:27:33.483 killing process with pid 2333742 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 2333742 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 2333742 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.483 16:54:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.742 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:34.000 00:27:34.000 real 1m12.238s 00:27:34.000 user 4m30.532s 00:27:34.000 sys 0m6.403s 00:27:34.000 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.000 16:54:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:34.000 ************************************ 00:27:34.000 END TEST nvmf_initiator_timeout 00:27:34.000 ************************************ 00:27:34.000 16:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:34.000 16:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:34.000 16:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:34.000 16:54:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:34.000 16:54:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:39.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:39.281 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:39.281 Found net devices under 0000:31:00.0: cvl_0_0 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:39.281 Found net devices under 0000:31:00.1: cvl_0_1 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:39.281 ************************************ 00:27:39.281 START TEST nvmf_perf_adq 00:27:39.281 ************************************ 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:39.281 * Looking for test storage... 
00:27:39.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:39.281 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.282 --rc genhtml_branch_coverage=1 00:27:39.282 --rc genhtml_function_coverage=1 00:27:39.282 --rc genhtml_legend=1 00:27:39.282 --rc geninfo_all_blocks=1 00:27:39.282 --rc geninfo_unexecuted_blocks=1 00:27:39.282 00:27:39.282 ' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.282 --rc genhtml_branch_coverage=1 00:27:39.282 --rc genhtml_function_coverage=1 00:27:39.282 --rc genhtml_legend=1 00:27:39.282 --rc geninfo_all_blocks=1 00:27:39.282 --rc geninfo_unexecuted_blocks=1 00:27:39.282 00:27:39.282 ' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.282 --rc genhtml_branch_coverage=1 00:27:39.282 --rc genhtml_function_coverage=1 00:27:39.282 --rc genhtml_legend=1 00:27:39.282 --rc geninfo_all_blocks=1 00:27:39.282 --rc geninfo_unexecuted_blocks=1 00:27:39.282 00:27:39.282 ' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.282 --rc genhtml_branch_coverage=1 00:27:39.282 --rc genhtml_function_coverage=1 00:27:39.282 --rc genhtml_legend=1 00:27:39.282 --rc geninfo_all_blocks=1 00:27:39.282 --rc geninfo_unexecuted_blocks=1 00:27:39.282 00:27:39.282 ' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
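The lt/cmp_versions trace above performs a field-wise numeric comparison of dotted version strings (here deciding that lcov 1.15 predates 2). A minimal standalone sketch of the same idea — ver_lt is a hypothetical name, not the scripts/common.sh helper:

    # Return 0 (true) when $1 sorts before $2, comparing dot-separated fields numerically.
    ver_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2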
00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.282 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:39.282 16:54:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:39.282 16:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.555 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.556 16:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:44.556 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:44.556 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:44.556 Found net devices under 0000:31:00.0: cvl_0_0 00:27:44.556 16:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:44.556 Found net devices under 0000:31:00.1: cvl_0_1 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:44.556 16:54:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:45.931 16:54:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:47.981 16:54:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:53.262 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:53.263 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:53.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:53.263 Found net devices under 0000:31:00.0: cvl_0_0 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:53.263 Found net devices under 0000:31:00.1: cvl_0_1 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:53.263 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:53.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:27:53.263 00:27:53.264 --- 10.0.0.2 ping statistics --- 00:27:53.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.264 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:53.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:27:53.264 00:27:53.264 --- 10.0.0.1 ping statistics --- 00:27:53.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.264 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2358447 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2358447 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2358447 ']' 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:53.264 [2024-12-06 16:54:41.641883] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
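Condensed, the nvmf_tcp_init sequence traced above moves the target port into a private network namespace, numbers both ends out of 10.0.0.0/24, opens TCP port 4420 through the firewall, and ping-checks the path in both directions before the target application is launched inside the namespace. A rough equivalent under assumed names (eth_tgt/eth_ini stand in for cvl_0_0/cvl_0_1, tgt_ns_sketch for cvl_0_0_ns_spdk):

    #!/usr/bin/env bash
    # Sketch of the namespace wiring performed by nvmf_tcp_init above.
    # Run as root on a host with two ports of the same NIC cabled back-to-back.
    set -e
    NS=tgt_ns_sketch                        # placeholder namespace name

    ip -4 addr flush eth_tgt                # start from clean addresses
    ip -4 addr flush eth_ini

    ip netns add "$NS"
    ip link set eth_tgt netns "$NS"         # target port leaves the root namespace

    ip addr add 10.0.0.1/24 dev eth_ini     # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt

    ip link set eth_ini up
    ip netns exec "$NS" ip link set eth_tgt up
    ip netns exec "$NS" ip link set lo up

    # allow the NVMe/TCP port in; the harness also tags the rule with
    # "-m comment" so nvmftestfini can strip exactly this rule later
    iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                      # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1  # namespaced target -> root ns

Keeping target and initiator in separate namespaces forces the NVMe/TCP traffic over the physical link between the two ports rather than the kernel loopback path, which is what lets the later perf run exercise the real NIC datapath.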
00:27:53.264 [2024-12-06 16:54:41.641936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.264 [2024-12-06 16:54:41.717021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.264 [2024-12-06 16:54:41.736014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.264 [2024-12-06 16:54:41.736050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.264 [2024-12-06 16:54:41.736056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.264 [2024-12-06 16:54:41.736061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.264 [2024-12-06 16:54:41.736066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.264 [2024-12-06 16:54:41.737424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.264 [2024-12-06 16:54:41.737641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.264 [2024-12-06 16:54:41.737800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.264 [2024-12-06 16:54:41.737800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.264 
16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.264 [2024-12-06 16:54:41.922150] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.264 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.525 Malloc1 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.525 [2024-12-06 16:54:41.978164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2358476 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:53.525 16:54:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:55.433 16:54:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:55.433 16:54:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.433 16:54:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:55.433 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.433 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:55.433 "tick_rate": 2400000000, 00:27:55.433 "poll_groups": [ 00:27:55.433 { 00:27:55.433 "name": "nvmf_tgt_poll_group_000", 00:27:55.433 "admin_qpairs": 1, 00:27:55.433 "io_qpairs": 1, 00:27:55.433 "current_admin_qpairs": 1, 00:27:55.433 "current_io_qpairs": 1, 00:27:55.433 "pending_bdev_io": 0, 00:27:55.433 "completed_nvme_io": 25643, 00:27:55.433 "transports": [ 00:27:55.433 { 00:27:55.433 "trtype": "TCP" 00:27:55.433 } 00:27:55.433 ] 00:27:55.433 }, 00:27:55.433 { 00:27:55.433 "name": "nvmf_tgt_poll_group_001", 00:27:55.433 "admin_qpairs": 0, 00:27:55.433 "io_qpairs": 1, 00:27:55.433 "current_admin_qpairs": 0, 00:27:55.433 "current_io_qpairs": 1, 00:27:55.433 "pending_bdev_io": 0, 00:27:55.433 "completed_nvme_io": 25217, 00:27:55.433 "transports": [ 00:27:55.433 { 00:27:55.433 "trtype": "TCP" 00:27:55.433 } 00:27:55.433 ] 00:27:55.433 }, 00:27:55.433 { 00:27:55.433 "name": "nvmf_tgt_poll_group_002", 00:27:55.433 "admin_qpairs": 0, 00:27:55.433 "io_qpairs": 1, 00:27:55.433 "current_admin_qpairs": 0, 00:27:55.433 "current_io_qpairs": 1, 00:27:55.433 "pending_bdev_io": 0, 00:27:55.433 "completed_nvme_io": 26249, 00:27:55.433 "transports": [ 00:27:55.433 { 00:27:55.433 "trtype": "TCP" 00:27:55.433 } 00:27:55.433 ] 00:27:55.433 }, 00:27:55.433 { 00:27:55.433 "name": "nvmf_tgt_poll_group_003", 00:27:55.433 "admin_qpairs": 0, 00:27:55.433 "io_qpairs": 1, 00:27:55.433 "current_admin_qpairs": 0, 00:27:55.433 "current_io_qpairs": 1, 00:27:55.433 "pending_bdev_io": 0, 00:27:55.433 "completed_nvme_io": 21040, 00:27:55.433 "transports": [ 00:27:55.433 { 00:27:55.433 "trtype": "TCP" 00:27:55.433 } 00:27:55.433 ] 00:27:55.433 } 00:27:55.433 ] 00:27:55.433 }' 00:27:55.433 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:55.433 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:55.433 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:55.433 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:55.434 16:54:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2358476 00:28:03.576 Initializing NVMe Controllers 00:28:03.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:03.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:03.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:03.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:28:03.576 Initialization complete. Launching workers. 00:28:03.576 ======================================================== 00:28:03.576 Latency(us) 00:28:03.576 Device Information : IOPS MiB/s Average min max 00:28:03.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13706.70 53.54 4668.71 1152.56 9130.45 00:28:03.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14020.40 54.77 4564.24 1081.16 9713.67 00:28:03.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14062.10 54.93 4551.10 1148.89 9961.90 00:28:03.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14032.20 54.81 4561.62 1036.64 7798.09 00:28:03.576 ======================================================== 00:28:03.576 Total : 55821.40 218.05 4585.92 1036.64 9961.90 00:28:03.576 00:28:03.576 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:03.576 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.576 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:03.576 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.577 rmmod nvme_tcp 00:28:03.577 rmmod nvme_fabrics 00:28:03.577 rmmod nvme_keyring 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2358447 ']' 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2358447 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2358447 ']' 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2358447 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2358447 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2358447' 00:28:03.577 killing process with pid 2358447 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2358447 00:28:03.577 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2358447 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.838 16:54:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.747 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:05.747 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:05.747 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:05.747 16:54:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:07.654 16:54:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:09.560 16:54:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.841 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.842 16:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:14.842 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:14.842 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:14.842 Found net devices under 0000:31:00.0: cvl_0_0 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.842 16:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:14.842 Found net devices under 0000:31:00.1: cvl_0_1 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.842 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.843 16:55:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:28:14.843 00:28:14.843 --- 10.0.0.2 ping statistics --- 00:28:14.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.843 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:28:14.843 00:28:14.843 --- 10.0.0.1 ping statistics --- 00:28:14.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.843 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:14.843 net.core.busy_poll = 1 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:28:14.843 net.core.busy_read = 1 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2363571 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2363571 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2363571 ']' 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:14.843 16:55:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.843 [2024-12-06 16:55:03.346186] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:28:14.843 [2024-12-06 16:55:03.346237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.843 [2024-12-06 16:55:03.433193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.843 [2024-12-06 16:55:03.457814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
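Two things happen in the block above. First, nvmf_tcp_init builds the test topology from the two physical ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2 to act as the target, its sibling cvl_0_1 stays in the root namespace with 10.0.0.1 as the initiator, and the cross-namespace pings prove the link. Second, adq_configure_driver arms ADQ on the target port: hardware TC offload, busy polling, an mqprio qdisc with two traffic classes in channel mode, and a hardware-only flower filter steering NVMe/TCP traffic to TC 1. Collected from the trace:

    NS="ip netns exec cvl_0_0_ns_spdk"
    $NS ethtool --offload cvl_0_0 hw-tc-offload on
    $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two TCs: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded in channel mode
    $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    $NS tc qdisc add dev cvl_0_0 ingress
    # steer TCP traffic for 10.0.0.2:4420 to TC 1 in hardware only (skip_sw)
    $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
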
00:28:14.843 [2024-12-06 16:55:03.457862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.843 [2024-12-06 16:55:03.457871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.843 [2024-12-06 16:55:03.457878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.843 [2024-12-06 16:55:03.457884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:14.843 [2024-12-06 16:55:03.460037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.843 [2024-12-06 16:55:03.460196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.843 [2024-12-06 16:55:03.460516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.843 [2024-12-06 16:55:03.460519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.784 16:55:04 
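The startup order above matters: the target is launched with --wait-for-rpc so that sock_impl_set_options can enable placement IDs and server-side zero-copy on the posix socket implementation before framework_start_init brings the subsystems up; those socket options must be in place before the transport layer initializes. The same sequence against a live target, with rpc.py standing in for the harness's rpc_cmd wrapper (the script path is an assumption):

    RPC=./scripts/rpc.py
    impl=$($RPC sock_get_default_impl | jq -r .impl_name)     # "posix" above
    $RPC sock_impl_set_options -i "$impl" \
        --enable-placement-id 1 --enable-zerocopy-send-server
    $RPC framework_start_init                                 # releases the --wait-for-rpc hold
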
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 [2024-12-06 16:55:04.253323] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.784 Malloc1 00:28:15.784 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:15.785 [2024-12-06 16:55:04.305817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2363779 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:15.785 16:55:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.688 16:55:06 
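Provisioning then takes four RPCs, after which the load generator is started from the initiator side: a TCP transport with socket priority 1 (so accepted connections inherit the ADQ-steered priority), a 64 MiB malloc bdev, a subsystem carrying that namespace, and a listener on 10.0.0.2:4420; spdk_nvme_perf drives 4 KiB random reads at queue depth 64 from cores 0xF0 for 10 seconds. The same steps, again via rpc.py as a stand-in for rpc_cmd:

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
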
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:17.688 "tick_rate": 2400000000, 00:28:17.688 "poll_groups": [ 00:28:17.688 { 00:28:17.688 "name": "nvmf_tgt_poll_group_000", 00:28:17.688 "admin_qpairs": 1, 00:28:17.688 "io_qpairs": 4, 00:28:17.688 "current_admin_qpairs": 1, 00:28:17.688 "current_io_qpairs": 4, 00:28:17.688 "pending_bdev_io": 0, 00:28:17.688 "completed_nvme_io": 49155, 00:28:17.688 "transports": [ 00:28:17.688 { 00:28:17.688 "trtype": "TCP" 00:28:17.688 } 00:28:17.688 ] 00:28:17.688 }, 00:28:17.688 { 00:28:17.688 "name": "nvmf_tgt_poll_group_001", 00:28:17.688 "admin_qpairs": 0, 00:28:17.688 "io_qpairs": 0, 00:28:17.688 "current_admin_qpairs": 0, 00:28:17.688 "current_io_qpairs": 0, 00:28:17.688 "pending_bdev_io": 0, 00:28:17.688 "completed_nvme_io": 0, 00:28:17.688 "transports": [ 00:28:17.688 { 00:28:17.688 "trtype": "TCP" 00:28:17.688 } 00:28:17.688 ] 00:28:17.688 }, 00:28:17.688 { 00:28:17.688 "name": "nvmf_tgt_poll_group_002", 00:28:17.688 "admin_qpairs": 0, 00:28:17.688 "io_qpairs": 0, 00:28:17.688 "current_admin_qpairs": 0, 00:28:17.688 "current_io_qpairs": 0, 00:28:17.688 "pending_bdev_io": 0, 00:28:17.688 "completed_nvme_io": 0, 00:28:17.688 "transports": [ 00:28:17.688 { 00:28:17.688 "trtype": "TCP" 00:28:17.688 } 00:28:17.688 ] 00:28:17.688 }, 00:28:17.688 { 00:28:17.688 "name": "nvmf_tgt_poll_group_003", 00:28:17.688 "admin_qpairs": 0, 00:28:17.688 "io_qpairs": 0, 00:28:17.688 "current_admin_qpairs": 0, 00:28:17.688 "current_io_qpairs": 0, 00:28:17.688 "pending_bdev_io": 0, 00:28:17.688 "completed_nvme_io": 0, 00:28:17.688 "transports": [ 00:28:17.688 { 00:28:17.688 "trtype": "TCP" 00:28:17.688 } 00:28:17.688 ] 00:28:17.688 } 00:28:17.688 ] 00:28:17.688 }' 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:17.688 16:55:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2363779 00:28:25.814 Initializing NVMe Controllers 00:28:25.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:25.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:25.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:25.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:25.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:25.814 Initialization complete. Launching workers. 
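The nvmf_get_stats dump above is the actual ADQ assertion: with steering active, all four I/O qpairs have landed on nvmf_tgt_poll_group_000, leaving the other three poll groups idle, and the test treats fewer than two idle groups as a steering failure. The check below uses the jq expression from the trace; the failure branch at perf_adq.sh@110 is not visible here, so its body is assumed:

    RPC=./scripts/rpc.py
    count=$($RPC nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)                                # 3 in the run above
    if [[ $count -lt 2 ]]; then
        echo "ADQ steering ineffective: only $count idle poll groups" >&2
    fi
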
00:28:25.814 ======================================================== 00:28:25.814 Latency(us) 00:28:25.814 Device Information : IOPS MiB/s Average min max 00:28:25.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6988.20 27.30 9161.99 1085.90 53804.66 00:28:25.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6868.70 26.83 9338.20 1246.63 54195.31 00:28:25.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5559.00 21.71 11545.85 1128.32 55904.99 00:28:25.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6781.60 26.49 9480.49 1054.71 54669.72 00:28:25.814 ======================================================== 00:28:25.814 Total : 26197.50 102.33 9796.48 1054.71 55904.99 00:28:25.814 00:28:25.814 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:25.814 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:25.814 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:25.814 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:25.814 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:25.814 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:25.814 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:25.814 rmmod nvme_tcp 00:28:26.074 rmmod nvme_fabrics 00:28:26.074 rmmod nvme_keyring 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2363571 ']' 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2363571 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2363571 ']' 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2363571 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.074 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2363571 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2363571' 00:28:26.075 killing process with pid 2363571 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2363571 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2363571 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:26.075 
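The teardown completed in the lines that follow is symmetric with the setup: unload the nvme-tcp/fabrics/keyring modules, kill the target by pid, strip the SPDK-tagged iptables rule, and remove the namespace. A sketch of the iptables and namespace half; iptr and _remove_spdk_ns are internal helpers, so their bodies here are approximations:

    # drop every rule carrying the SPDK_NVMF comment, keep everything else
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # delete the target namespace and clear the leftover initiator address
    ip netns del cvl_0_0_ns_spdk 2>/dev/null
    ip -4 addr flush cvl_0_1
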
16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.075 16:55:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:28.609 00:28:28.609 real 0m49.258s 00:28:28.609 user 2m46.137s 00:28:28.609 sys 0m8.603s 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.609 ************************************ 00:28:28.609 END TEST nvmf_perf_adq 00:28:28.609 ************************************ 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:28.609 ************************************ 00:28:28.609 START TEST nvmf_shutdown 00:28:28.609 ************************************ 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:28.609 * Looking for test storage... 
00:28:28.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.609 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:28.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.610 --rc genhtml_branch_coverage=1 00:28:28.610 --rc genhtml_function_coverage=1 00:28:28.610 --rc genhtml_legend=1 00:28:28.610 --rc geninfo_all_blocks=1 00:28:28.610 --rc geninfo_unexecuted_blocks=1 00:28:28.610 00:28:28.610 ' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:28.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.610 --rc genhtml_branch_coverage=1 00:28:28.610 --rc genhtml_function_coverage=1 00:28:28.610 --rc genhtml_legend=1 00:28:28.610 --rc geninfo_all_blocks=1 00:28:28.610 --rc geninfo_unexecuted_blocks=1 00:28:28.610 00:28:28.610 ' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:28.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.610 --rc genhtml_branch_coverage=1 00:28:28.610 --rc genhtml_function_coverage=1 00:28:28.610 --rc genhtml_legend=1 00:28:28.610 --rc geninfo_all_blocks=1 00:28:28.610 --rc geninfo_unexecuted_blocks=1 00:28:28.610 00:28:28.610 ' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:28.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.610 --rc genhtml_branch_coverage=1 00:28:28.610 --rc genhtml_function_coverage=1 00:28:28.610 --rc genhtml_legend=1 00:28:28.610 --rc geninfo_all_blocks=1 00:28:28.610 --rc geninfo_unexecuted_blocks=1 00:28:28.610 00:28:28.610 ' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
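The decimal/ver1/ver2 churn above is scripts/common.sh deciding whether the installed lcov is older than 2.0 ("lt 1.15 2") so it can pick coverage flags the binary understands. A compact sketch of the comparison; only the "<" branch is exercised in this trace:

    lt() {   # lt 1.15 2 -> exit 0 when version $1 < $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater: not less
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2 decides it here
        done
        return 1   # equal versions are not "less than"
    }
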
00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:28.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:28.610 16:55:16 
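The "[: : integer expression expected" complaint above is benign but worth decoding: nvmf/common.sh line 33 runs a numeric test of the form [ "$var" -eq 1 ] while the variable is empty, and test(1) cannot parse an empty string as an integer. The condition simply evaluates false and the script continues; a defaulted expansion would silence it:

    var=""
    [ "$var" -eq 1 ]          # prints "[: : integer expression expected", exits 2 (false)
    [ "${var:-0}" -eq 1 ]     # defaulting to 0 keeps the test quiet (exits 1, still false)
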
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:28.610 ************************************ 00:28:28.610 START TEST nvmf_shutdown_tc1 00:28:28.610 ************************************ 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.610 16:55:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.890 16:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.890 16:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:33.890 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:33.890 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.890 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:33.891 Found net devices under 0000:31:00.0: cvl_0_0 00:28:33.891 16:55:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:33.891 Found net devices under 0000:31:00.1: cvl_0_1 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.891 16:55:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:28:33.891 00:28:33.891 --- 10.0.0.2 ping statistics --- 00:28:33.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.891 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:28:33.891 00:28:33.891 --- 10.0.0.1 ping statistics --- 00:28:33.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.891 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2370395 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2370395 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2370395 ']' 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
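Both nvmfappstart call sites follow the same pattern seen above: launch nvmf_tgt in the target namespace, record its pid in nvmfpid, and block in waitforlisten until the UNIX-domain RPC socket appears or 100 retries elapse, which is what the "Waiting for process to start up..." message announces. A minimal sketch of that wait loop; the real helper lives in autotest_common.sh and its internals are assumed here:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_addr ]] && return 0           # RPC socket is listening
            sleep 0.1
        done
        return 1
    }
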
00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.891 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:33.892 [2024-12-06 16:55:22.280027] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:28:33.892 [2024-12-06 16:55:22.280079] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.892 [2024-12-06 16:55:22.350915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.892 [2024-12-06 16:55:22.367159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.892 [2024-12-06 16:55:22.367187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.892 [2024-12-06 16:55:22.367193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.892 [2024-12-06 16:55:22.367198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.892 [2024-12-06 16:55:22.367202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.892 [2024-12-06 16:55:22.368478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.892 [2024-12-06 16:55:22.368637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.892 [2024-12-06 16:55:22.368803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:33.892 [2024-12-06 16:55:22.368806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.892 [2024-12-06 16:55:22.467001] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:33.892 16:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.892 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:33.892 Malloc1 00:28:33.892 [2024-12-06 16:55:22.552970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.892 Malloc2 00:28:34.152 Malloc3 00:28:34.152 Malloc4 00:28:34.152 Malloc5 00:28:34.152 Malloc6 00:28:34.152 Malloc7 00:28:34.152 Malloc8 00:28:34.412 Malloc9 00:28:34.412 Malloc10 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2370509 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2370509 /var/tmp/bdevperf.sock 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2370509 ']' 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:34.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
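The rpcs.txt batch whose generation is traced above is applied in a single rpc_cmd invocation that appears to read the file on stdin (target/shutdown.sh@36); that is the point where the Malloc1 through Malloc10 bdevs appear and the target begins listening on 10.0.0.2:4420. xtrace does not echo the file's contents, but a plausible per-subsystem stanza, consistent with the bdev names, subsystem NQNs, and listener seen in this log (the bdev size and block size below are assumptions), would be:

# Hypothetical reconstruction of one rpcs.txt stanza, repeated for each i in 1..10:
bdev_malloc_create -b Malloc$i 64 512                                # 64 MiB bdev, 512 B blocks (sizes assumed)
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i       # allow any host, serial SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i           # expose the bdev as a namespace
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

Feeding the whole file through one rpc_cmd call issues all of the RPCs over a single rpc.py session rather than invoking the client once per command inside the loop.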
00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.412 { 00:28:34.412 "params": { 00:28:34.412 "name": "Nvme$subsystem", 00:28:34.412 "trtype": "$TEST_TRANSPORT", 00:28:34.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.412 "adrfam": "ipv4", 00:28:34.412 "trsvcid": "$NVMF_PORT", 00:28:34.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.412 "hdgst": ${hdgst:-false}, 00:28:34.412 "ddgst": ${ddgst:-false} 00:28:34.412 }, 00:28:34.412 "method": "bdev_nvme_attach_controller" 00:28:34.412 } 00:28:34.412 EOF 00:28:34.412 )") 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.412 { 00:28:34.412 "params": { 00:28:34.412 "name": "Nvme$subsystem", 00:28:34.412 "trtype": "$TEST_TRANSPORT", 00:28:34.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.412 "adrfam": "ipv4", 00:28:34.412 "trsvcid": "$NVMF_PORT", 00:28:34.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.412 "hdgst": ${hdgst:-false}, 00:28:34.412 "ddgst": ${ddgst:-false} 00:28:34.412 }, 00:28:34.412 "method": "bdev_nvme_attach_controller" 00:28:34.412 } 00:28:34.412 EOF 00:28:34.412 )") 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.412 { 00:28:34.412 "params": { 00:28:34.412 "name": "Nvme$subsystem", 00:28:34.412 "trtype": "$TEST_TRANSPORT", 00:28:34.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.412 "adrfam": "ipv4", 00:28:34.412 "trsvcid": "$NVMF_PORT", 00:28:34.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.412 "hdgst": ${hdgst:-false}, 00:28:34.412 "ddgst": ${ddgst:-false} 00:28:34.412 }, 00:28:34.412 "method": "bdev_nvme_attach_controller" 
00:28:34.412 } 00:28:34.412 EOF 00:28:34.412 )") 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.412 { 00:28:34.412 "params": { 00:28:34.412 "name": "Nvme$subsystem", 00:28:34.412 "trtype": "$TEST_TRANSPORT", 00:28:34.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.412 "adrfam": "ipv4", 00:28:34.412 "trsvcid": "$NVMF_PORT", 00:28:34.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.412 "hdgst": ${hdgst:-false}, 00:28:34.412 "ddgst": ${ddgst:-false} 00:28:34.412 }, 00:28:34.412 "method": "bdev_nvme_attach_controller" 00:28:34.412 } 00:28:34.412 EOF 00:28:34.412 )") 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.412 { 00:28:34.412 "params": { 00:28:34.412 "name": "Nvme$subsystem", 00:28:34.412 "trtype": "$TEST_TRANSPORT", 00:28:34.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.412 "adrfam": "ipv4", 00:28:34.412 "trsvcid": "$NVMF_PORT", 00:28:34.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.412 "hdgst": ${hdgst:-false}, 00:28:34.412 "ddgst": ${ddgst:-false} 00:28:34.412 }, 00:28:34.412 "method": "bdev_nvme_attach_controller" 00:28:34.412 } 00:28:34.412 EOF 00:28:34.412 )") 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.412 { 00:28:34.412 "params": { 00:28:34.412 "name": "Nvme$subsystem", 00:28:34.412 "trtype": "$TEST_TRANSPORT", 00:28:34.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.412 "adrfam": "ipv4", 00:28:34.412 "trsvcid": "$NVMF_PORT", 00:28:34.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.412 "hdgst": ${hdgst:-false}, 00:28:34.412 "ddgst": ${ddgst:-false} 00:28:34.412 }, 00:28:34.412 "method": "bdev_nvme_attach_controller" 00:28:34.412 } 00:28:34.412 EOF 00:28:34.412 )") 00:28:34.412 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.412 [2024-12-06 16:55:22.962362] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:28:34.413 [2024-12-06 16:55:22.962415] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.413 { 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme$subsystem", 00:28:34.413 "trtype": "$TEST_TRANSPORT", 00:28:34.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "$NVMF_PORT", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.413 "hdgst": ${hdgst:-false}, 00:28:34.413 "ddgst": ${ddgst:-false} 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 } 00:28:34.413 EOF 00:28:34.413 )") 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.413 { 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme$subsystem", 00:28:34.413 "trtype": "$TEST_TRANSPORT", 00:28:34.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "$NVMF_PORT", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.413 "hdgst": ${hdgst:-false}, 00:28:34.413 "ddgst": ${ddgst:-false} 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 } 00:28:34.413 EOF 00:28:34.413 )") 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.413 { 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme$subsystem", 00:28:34.413 "trtype": "$TEST_TRANSPORT", 00:28:34.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "$NVMF_PORT", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.413 "hdgst": ${hdgst:-false}, 00:28:34.413 "ddgst": ${ddgst:-false} 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 } 00:28:34.413 EOF 00:28:34.413 )") 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.413 { 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme$subsystem", 00:28:34.413 "trtype": "$TEST_TRANSPORT", 00:28:34.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.413 "adrfam": "ipv4", 
00:28:34.413 "trsvcid": "$NVMF_PORT", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.413 "hdgst": ${hdgst:-false}, 00:28:34.413 "ddgst": ${ddgst:-false} 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 } 00:28:34.413 EOF 00:28:34.413 )") 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:34.413 16:55:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme1", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme2", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme3", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme4", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme5", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme6", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme7", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 
"adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme8", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme9", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 },{ 00:28:34.413 "params": { 00:28:34.413 "name": "Nvme10", 00:28:34.413 "trtype": "tcp", 00:28:34.413 "traddr": "10.0.0.2", 00:28:34.413 "adrfam": "ipv4", 00:28:34.413 "trsvcid": "4420", 00:28:34.413 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:34.413 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:34.413 "hdgst": false, 00:28:34.413 "ddgst": false 00:28:34.413 }, 00:28:34.413 "method": "bdev_nvme_attach_controller" 00:28:34.413 }' 00:28:34.413 [2024-12-06 16:55:23.040557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.413 [2024-12-06 16:55:23.059063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2370509 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:36.456 16:55:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:37.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2370509 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2370395 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.402 "method": "bdev_nvme_attach_controller" 00:28:37.402 } 00:28:37.402 EOF 00:28:37.402 )") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.402 "method": "bdev_nvme_attach_controller" 00:28:37.402 } 00:28:37.402 EOF 00:28:37.402 )") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.402 "method": "bdev_nvme_attach_controller" 00:28:37.402 } 00:28:37.402 EOF 00:28:37.402 )") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.402 "method": "bdev_nvme_attach_controller" 00:28:37.402 } 00:28:37.402 EOF 00:28:37.402 )") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.402 "method": "bdev_nvme_attach_controller" 00:28:37.402 } 00:28:37.402 EOF 00:28:37.402 )") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.402 "method": "bdev_nvme_attach_controller" 00:28:37.402 } 00:28:37.402 EOF 00:28:37.402 )") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.402 "method": "bdev_nvme_attach_controller" 00:28:37.402 } 00:28:37.402 EOF 00:28:37.402 )") 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.402 16:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.402 [2024-12-06 16:55:25.798599] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:28:37.402 [2024-12-06 16:55:25.798654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2371145 ] 00:28:37.402 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.402 { 00:28:37.402 "params": { 00:28:37.402 "name": "Nvme$subsystem", 00:28:37.402 "trtype": "$TEST_TRANSPORT", 00:28:37.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.402 "adrfam": "ipv4", 00:28:37.402 "trsvcid": "$NVMF_PORT", 00:28:37.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.402 "hdgst": ${hdgst:-false}, 00:28:37.402 "ddgst": ${ddgst:-false} 00:28:37.402 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 } 00:28:37.403 EOF 00:28:37.403 )") 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.403 { 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme$subsystem", 00:28:37.403 "trtype": "$TEST_TRANSPORT", 00:28:37.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "$NVMF_PORT", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.403 "hdgst": ${hdgst:-false}, 00:28:37.403 "ddgst": ${ddgst:-false} 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 } 00:28:37.403 EOF 00:28:37.403 )") 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:37.403 { 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme$subsystem", 00:28:37.403 "trtype": "$TEST_TRANSPORT", 00:28:37.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "$NVMF_PORT", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:37.403 "hdgst": ${hdgst:-false}, 00:28:37.403 "ddgst": ${ddgst:-false} 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 } 00:28:37.403 EOF 00:28:37.403 )") 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
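The heredoc blocks repeated above are gen_nvmf_target_json accumulating one bdev_nvme_attach_controller fragment per subsystem into a bash array; a jq pass pretty-prints and sanity-checks the assembled document, and the IFS=,/printf step just below joins the fragments with commas into the single JSON string that bdevperf reads through the --json /dev/fd/62 process substitution visible above. A trimmed sketch of the accumulate-and-join pattern (two subsystems instead of ten; the outer wrapper object that the real helper emits around this list is not visible in the trace and is omitted here):

config=()
for i in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # "${config[*]}" joins the elements with the first character of IFS

Because hdgst and ddgst are unset here, the ${hdgst:-false}/${ddgst:-false} expansions yield the literal false seen in the printed config.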
00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:37.403 16:55:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme1", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme2", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme3", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme4", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme5", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme6", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme7", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme8", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme9", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 },{ 00:28:37.403 "params": { 00:28:37.403 "name": "Nvme10", 00:28:37.403 "trtype": "tcp", 00:28:37.403 "traddr": "10.0.0.2", 00:28:37.403 "adrfam": "ipv4", 00:28:37.403 "trsvcid": "4420", 00:28:37.403 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:37.403 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:37.403 "hdgst": false, 00:28:37.403 "ddgst": false 00:28:37.403 }, 00:28:37.403 "method": "bdev_nvme_attach_controller" 00:28:37.403 }' 00:28:37.403 [2024-12-06 16:55:25.878097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.403 [2024-12-06 16:55:25.896179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.780 Running I/O for 1 seconds... 00:28:39.719 2190.00 IOPS, 136.88 MiB/s 00:28:39.719 Latency(us) 00:28:39.719 [2024-12-06T15:55:28.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.719 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme1n1 : 1.13 283.14 17.70 0.00 0.00 222270.98 4532.91 248162.99 00:28:39.719 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme2n1 : 1.08 237.67 14.85 0.00 0.00 261697.71 15728.64 241172.48 00:28:39.719 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme3n1 : 1.10 293.41 18.34 0.00 0.00 207728.98 2812.59 228939.09 00:28:39.719 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme4n1 : 1.12 285.26 17.83 0.00 0.00 210462.04 23592.96 246415.36 00:28:39.719 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme5n1 : 1.12 284.86 17.80 0.00 0.00 206865.24 12670.29 239424.85 00:28:39.719 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme6n1 : 1.16 276.53 17.28 0.00 0.00 208894.08 5515.95 251658.24 00:28:39.719 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme7n1 : 1.14 281.73 17.61 0.00 0.00 201580.37 15837.87 248162.99 00:28:39.719 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme8n1 : 1.19 272.44 17.03 0.00 0.00 195941.84 5597.87 239424.85 00:28:39.719 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme9n1 : 1.19 327.85 20.49 0.00 0.00 167695.31 1706.67 222822.40 00:28:39.719 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:28:39.719 Verification LBA range: start 0x0 length 0x400 00:28:39.719 Nvme10n1 : 1.19 322.28 20.14 0.00 0.00 167575.75 5324.80 272629.76 00:28:39.719 [2024-12-06T15:55:28.412Z] =================================================================================================================== 00:28:39.719 [2024-12-06T15:55:28.412Z] Total : 2865.17 179.07 0.00 0.00 202436.16 1706.67 272629.76 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:39.719 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:39.719 rmmod nvme_tcp 00:28:39.978 rmmod nvme_fabrics 00:28:39.978 rmmod nvme_keyring 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2370395 ']' 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2370395 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2370395 ']' 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2370395 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370395 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:39.978 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370395' 00:28:39.978 killing process with pid 2370395 00:28:39.979 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2370395 00:28:39.979 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2370395 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.238 16:55:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.144 00:28:42.144 real 0m13.804s 00:28:42.144 user 0m30.797s 00:28:42.144 sys 0m4.866s 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.144 ************************************ 00:28:42.144 END TEST nvmf_shutdown_tc1 00:28:42.144 ************************************ 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:42.144 ************************************ 00:28:42.144 START TEST nvmf_shutdown_tc2 00:28:42.144 ************************************ 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:42.144 16:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.144 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:42.403 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:42.403 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:42.403 Found net devices under 0000:31:00.0: cvl_0_0 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.403 16:55:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:42.403 Found net devices under 0000:31:00.1: cvl_0_1 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.403 16:55:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:28:42.403 00:28:42.403 --- 10.0.0.2 ping statistics --- 00:28:42.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.403 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:28:42.403 00:28:42.403 --- 10.0.0.1 ping statistics --- 00:28:42.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.403 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:42.403 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.404 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.404 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.404 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.404 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.404 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.404 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.662 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:42.662 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.662 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.662 16:55:31 
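The block above is the entire point-to-point topology for this run: nvmf_tcp_init leaves the initiator port (cvl_0_1) in the root namespace and hides the target port (cvl_0_0) inside the cvl_0_0_ns_spdk namespace, so the kernel initiator and the SPDK target can coexist on one host; the two E810 ports are evidently cross-connected, hence the sub-millisecond pings. Distilled to its commands (same names and addresses as the trace):

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port out of the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:...'      # tagged so teardown can strip it
  ping -c 1 10.0.0.2                            # root ns -> target: 0.510 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator: 0.239 ms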
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.662 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2372572 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2372572 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2372572 ']' 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:42.663 [2024-12-06 16:55:31.147118] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:28:42.663 [2024-12-06 16:55:31.147166] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.663 [2024-12-06 16:55:31.218240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.663 [2024-12-06 16:55:31.234530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.663 [2024-12-06 16:55:31.234557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.663 [2024-12-06 16:55:31.234563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.663 [2024-12-06 16:55:31.234567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.663 [2024-12-06 16:55:31.234571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
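Two details in the nvmfappstart trace above are easy to miss. First, the target command line carries a doubled 'ip netns exec cvl_0_0_ns_spdk' prefix: common.sh@293 folds NVMF_TARGET_NS_CMD into NVMF_APP and common.sh@508 applies the namespace wrapper again, which is harmless since the inner exec just re-enters the same namespace. Second, waitforlisten blocks until pid 2372572 has its RPC socket up before any rpc_cmd runs. A rough stand-in for that helper (the real one lives in common/autotest_common.sh and does more validation):

  wait_for_rpc_sock() {    # illustrative sketch only, not the shipped waitforlisten
      local pid=$1 sock=${2:-/var/tmp/spdk.sock}
      for ((i = 0; i < 100; i++)); do             # max_retries=100, as traced
          kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
          [[ -S $sock ]] && return 0               # UNIX-domain RPC socket is up
          sleep 0.1
      done
      return 1
  }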
00:28:42.663 [2024-12-06 16:55:31.235830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.663 [2024-12-06 16:55:31.235950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.663 [2024-12-06 16:55:31.236066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.663 [2024-12-06 16:55:31.236068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.663 [2024-12-06 16:55:31.334348] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.663 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.923 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.923 Malloc1 00:28:42.923 [2024-12-06 16:55:31.424061] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.923 Malloc2 00:28:42.923 Malloc3 00:28:42.923 Malloc4 00:28:42.923 Malloc5 00:28:42.923 Malloc6 00:28:43.182 Malloc7 00:28:43.182 Malloc8 00:28:43.182 Malloc9 00:28:43.182 Malloc10 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2372633 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2372633 /var/tmp/bdevperf.sock 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2372633 ']' 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:43.182 16:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:43.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.182 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.182 { 00:28:43.182 "params": { 00:28:43.182 "name": "Nvme$subsystem", 00:28:43.182 "trtype": "$TEST_TRANSPORT", 00:28:43.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.182 "adrfam": "ipv4", 00:28:43.182 "trsvcid": "$NVMF_PORT", 00:28:43.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.182 "hdgst": ${hdgst:-false}, 00:28:43.182 "ddgst": ${ddgst:-false} 00:28:43.182 }, 00:28:43.182 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 
"name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 [2024-12-06 16:55:31.834024] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:28:43.183 [2024-12-06 16:55:31.834078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2372633 ] 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:43.183 { 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme$subsystem", 00:28:43.183 "trtype": "$TEST_TRANSPORT", 00:28:43.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.183 
"adrfam": "ipv4", 00:28:43.183 "trsvcid": "$NVMF_PORT", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.183 "hdgst": ${hdgst:-false}, 00:28:43.183 "ddgst": ${ddgst:-false} 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 } 00:28:43.183 EOF 00:28:43.183 )") 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:43.183 16:55:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme1", 00:28:43.183 "trtype": "tcp", 00:28:43.183 "traddr": "10.0.0.2", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "4420", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.183 "hdgst": false, 00:28:43.183 "ddgst": false 00:28:43.183 }, 00:28:43.183 "method": "bdev_nvme_attach_controller" 00:28:43.183 },{ 00:28:43.183 "params": { 00:28:43.183 "name": "Nvme2", 00:28:43.183 "trtype": "tcp", 00:28:43.183 "traddr": "10.0.0.2", 00:28:43.183 "adrfam": "ipv4", 00:28:43.183 "trsvcid": "4420", 00:28:43.183 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:43.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme3", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme4", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme5", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme6", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme7", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 
00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme8", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme9", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 },{ 00:28:43.184 "params": { 00:28:43.184 "name": "Nvme10", 00:28:43.184 "trtype": "tcp", 00:28:43.184 "traddr": "10.0.0.2", 00:28:43.184 "adrfam": "ipv4", 00:28:43.184 "trsvcid": "4420", 00:28:43.184 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:43.184 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:43.184 "hdgst": false, 00:28:43.184 "ddgst": false 00:28:43.184 }, 00:28:43.184 "method": "bdev_nvme_attach_controller" 00:28:43.184 }' 00:28:43.443 [2024-12-06 16:55:31.898755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.443 [2024-12-06 16:55:31.916491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.819 Running I/O for 10 seconds... 
00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:45.078 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.337 16:55:33 
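The waitforio helper traced above (its final, passing round resumes just below) is a bounded poll: up to ten reads of Nvme1n1's statistics over the bdevperf RPC socket, declaring success once at least 100 read ops have completed. Condensed from target/shutdown.sh@51-70 as traced:

  waitforio() {
      local ret=1 i=10 read_io_count
      while (( i != 0 )); do
          read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
                          | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then
              ret=0; break      # 67 ops on the first pass here, 195 on the second
          fi
          sleep 0.25
          (( i-- ))
      done
      return $ret
  }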
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']'
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2372633
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2372633 ']'
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2372633
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:45.337 16:55:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372633
00:28:45.597 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:45.597 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:45.597 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372633'
00:28:45.597 killing process with pid 2372633
00:28:45.597 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2372633
00:28:45.597 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2372633
00:28:45.597 Received shutdown signal, test time was about 0.828293 seconds
00:28:45.597
00:28:45.597 Latency(us)
00:28:45.597 [2024-12-06T15:55:34.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:45.597 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.597 Verification LBA range: start 0x0 length 0x400
00:28:45.597 Nvme1n1 : 0.78 327.77 20.49 0.00 0.00 193328.21 14308.69 178257.92
00:28:45.597 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.597 Verification LBA range: start 0x0 length 0x400
00:28:45.597 Nvme2n1 : 0.78 328.56 20.53 0.00 0.00 189417.60 18022.40 172141.23
00:28:45.598 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme3n1 : 0.78 408.21 25.51 0.00 0.00 149919.23 11359.57 178257.92
00:28:45.598 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme4n1 : 0.78 329.88 20.62 0.00 0.00 182254.51 15728.64 177384.11
00:28:45.598 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme5n1 : 0.77 333.36 20.84 0.00 0.00 176023.89 14854.83 173015.04
00:28:45.598 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme6n1 : 0.79 323.39 20.21 0.00 0.00 179534.08 16165.55 189617.49
00:28:45.598 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme7n1 : 0.77 333.00 20.81 0.00 0.00 170373.87 13216.43 173888.85
00:28:45.598 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme8n1 : 0.77 359.37 22.46 0.00 0.00 153105.34 7482.03 174762.67
00:28:45.598 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme9n1 : 0.79 330.11 20.63 0.00 0.00 165880.02 14527.15 194860.37
00:28:45.598 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:45.598 Verification LBA range: start 0x0 length 0x400
00:28:45.598 Nvme10n1 : 0.83 314.13 19.63 0.00 0.00 163393.12 13926.40 164276.91
00:28:45.598 [2024-12-06T15:55:34.291Z] ===================================================================================================================
00:28:45.598 [2024-12-06T15:55:34.291Z] Total : 3387.77 211.74 0.00 0.00 171600.26 7482.03 194860.37
00:28:45.598 16:55:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2372572
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:46.973 rmmod nvme_tcp
00:28:46.973 rmmod nvme_fabrics
00:28:46.973 rmmod nvme_keyring
00:28:46.973 16:55:35
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2372572 ']' 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2372572 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2372572 ']' 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2372572 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2372572 00:28:46.973 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2372572' 00:28:46.974 killing process with pid 2372572 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2372572 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2372572 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.974 16:55:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.974 16:55:35 
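The teardown order in stoptarget/nvmftestfini above is deliberate: bdevperf dies first (its shutdown handler printed the 0.83 s latency table), the kernel initiator stack is unloaded, then the target is killed, and only the SPDK-tagged firewall rules are stripped before the namespace plumbing goes away. Roughly, with the one assumption that _remove_spdk_ns deletes the namespace created earlier:

  kill 2372633 && wait 2372633        # bdevperf; triggers the shutdown stats
  modprobe -v -r nvme-tcp             # nvmfcleanup; drops nvme_fabrics/nvme_keyring too
  kill 2372572                        # the nvmf_tgt started at shutdown.sh@19
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: keep non-SPDK rules
  ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns here
  ip -4 addr flush cvl_0_1            # leave the initiator port clean for the next tc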
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.510 00:28:49.510 real 0m6.777s 00:28:49.510 user 0m19.562s 00:28:49.510 sys 0m0.988s 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:49.510 ************************************ 00:28:49.510 END TEST nvmf_shutdown_tc2 00:28:49.510 ************************************ 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:49.510 ************************************ 00:28:49.510 START TEST nvmf_shutdown_tc3 00:28:49.510 ************************************ 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:49.510 16:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.510 16:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:49.510 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:49.511 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:49.511 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.511 16:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:49.511 Found net devices under 0000:31:00.0: cvl_0_0 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:49.511 Found net devices under 0000:31:00.1: cvl_0_1 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:49.511 16:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:49.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:28:49.511 00:28:49.511 --- 10.0.0.2 ping statistics --- 00:28:49.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.511 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:49.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:28:49.511 00:28:49.511 --- 10.0.0.1 ping statistics --- 00:28:49.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.511 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2374081 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2374081 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2374081 ']' 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
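Collected from the nvmftestinit trace above, the namespace wiring reduces to a handful of iproute2/iptables commands and can be reproduced by hand; cvl_0_0/cvl_0_1 are this rig's ice port names and 4420 is the NVMe/TCP listener port:

# Move the target-side port into its own namespace, address both ends, open 4420
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator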
00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.511 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.512 16:55:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:49.512 [2024-12-06 16:55:37.975459] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:28:49.512 [2024-12-06 16:55:37.975522] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.512 [2024-12-06 16:55:38.054988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:49.512 [2024-12-06 16:55:38.075625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.512 [2024-12-06 16:55:38.075661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.512 [2024-12-06 16:55:38.075670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:49.512 [2024-12-06 16:55:38.075676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:49.512 [2024-12-06 16:55:38.075681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.512 [2024-12-06 16:55:38.077219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.512 [2024-12-06 16:55:38.077541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.512 [2024-12-06 16:55:38.077705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.512 [2024-12-06 16:55:38.077705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.512 [2024-12-06 16:55:38.180141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.512 16:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.512 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:49.772 
16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.772 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:49.772 Malloc1 00:28:49.772 [2024-12-06 16:55:38.262705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.772 Malloc2 00:28:49.772 Malloc3 00:28:49.772 Malloc4 00:28:49.772 Malloc5 00:28:49.772 Malloc6 00:28:50.033 Malloc7 00:28:50.033 Malloc8 00:28:50.033 Malloc9 00:28:50.033 Malloc10 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2374256 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2374256 /var/tmp/bdevperf.sock 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2374256 ']' 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:50.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
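The rpcs.txt batch that produced Malloc1 through Malloc10 and the 10.0.0.2:4420 listener is not echoed in this trace; one iteration of the per-subsystem setup, sketched with the standard SPDK rpc.py verbs (only the Malloc$i and cnode$i names are taken from this run, the 64 MiB / 512 B malloc geometry is illustrative):

# Sketch of one subsystem's setup over the default /var/tmp/spdk.sock (sizes assumed)
i=1
scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420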
00:28:50.033 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": 
"bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 [2024-12-06 16:55:38.673822] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:28:50.034 [2024-12-06 16:55:38.673874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2374256 ] 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 "adrfam": "ipv4", 00:28:50.034 "trsvcid": "$NVMF_PORT", 00:28:50.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.034 "hdgst": ${hdgst:-false}, 00:28:50.034 "ddgst": ${ddgst:-false} 00:28:50.034 }, 00:28:50.034 "method": "bdev_nvme_attach_controller" 00:28:50.034 } 00:28:50.034 EOF 00:28:50.034 )") 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.034 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.034 { 00:28:50.034 "params": { 00:28:50.034 "name": "Nvme$subsystem", 00:28:50.034 "trtype": "$TEST_TRANSPORT", 00:28:50.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.034 
"adrfam": "ipv4", 00:28:50.035 "trsvcid": "$NVMF_PORT", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.035 "hdgst": ${hdgst:-false}, 00:28:50.035 "ddgst": ${ddgst:-false} 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 } 00:28:50.035 EOF 00:28:50.035 )") 00:28:50.035 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:50.035 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:28:50.035 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:50.035 16:55:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme1", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme2", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme3", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme4", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme5", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme6", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme7", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 
00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme8", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme9", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 },{ 00:28:50.035 "params": { 00:28:50.035 "name": "Nvme10", 00:28:50.035 "trtype": "tcp", 00:28:50.035 "traddr": "10.0.0.2", 00:28:50.035 "adrfam": "ipv4", 00:28:50.035 "trsvcid": "4420", 00:28:50.035 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:50.035 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:50.035 "hdgst": false, 00:28:50.035 "ddgst": false 00:28:50.035 }, 00:28:50.035 "method": "bdev_nvme_attach_controller" 00:28:50.035 }' 00:28:50.296 [2024-12-06 16:55:38.739139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.296 [2024-12-06 16:55:38.755679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.676 Running I/O for 10 seconds... 
00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:51.937 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:52.202 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:52.202 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:52.202 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:52.202 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2374081 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2374081 ']' 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2374081 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2374081 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2374081' 00:28:52.203 killing process with pid 2374081 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2374081 00:28:52.203 16:55:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2374081 00:28:52.203 [2024-12-06 16:55:40.853737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.853780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.853786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.853792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.853797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.853802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 (tcp.c:1790 message repeated for tqpair=0x1257970 from [2024-12-06 16:55:40.853807] through [2024-12-06 16:55:40.854010]) 00:28:52.203 [2024-12-06 16:55:40.854015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same
with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854061] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:28:52.203 [2024-12-06 16:55:40.854357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.203 [2024-12-06 16:55:40.854389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.203 [2024-12-06 16:55:40.854398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.204 [2024-12-06 16:55:40.854404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.204 [2024-12-06 16:55:40.854410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.204 [2024-12-06 16:55:40.854416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.204 [2024-12-06 16:55:40.854422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.204 
[2024-12-06 16:55:40.854427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.204 [2024-12-06 16:55:40.854433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6d80 is same with the state(6) to be set
00:28:52.204 [2024-12-06 16:55:40.855904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125a3f0 is same with the state(6) to be set [line repeated verbatim for every entry through 16:55:40.856294]
00:28:52.204 [2024-12-06 16:55:40.862355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257e60 is same with the state(6) to be set
00:28:52.204 [2024-12-06 16:55:40.863161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258330 is same with the state(6) to be set [line repeated verbatim through 16:55:40.863482]
00:28:52.205 [2024-12-06 16:55:40.864544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1258cf0 is same with the state(6) to be set [line repeated verbatim through 16:55:40.864856]
00:28:52.206 [2024-12-06 16:55:40.865507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259070 is same with the state(6) to be set [line repeated verbatim through 16:55:40.865819]
00:28:52.207 [2024-12-06 16:55:40.866678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259540 is same with the state(6) to be set
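Context for the repeated *ERROR* lines above: the message comes from the TCP transport's recv-state setter, which logs and bails out whenever a caller asks it to enter the state the qpair is already in; while a disconnecting qpair is repeatedly driven toward its terminal recv state (state 6), every such call emits one more identical line. A minimal sketch of that guard follows. It is reconstructed from the message text at tcp.c:1790; the struct, enum, and logging details are assumptions, not code copied from the SPDK tree.

#include <stdio.h>

/* Hypothetical mirror of the recv-state guard behind the log spam above.
 * Names and the meaning of state 6 (terminal/error recv state) are
 * assumptions based on the message format, not SPDK's definitions. */
enum tcp_pdu_recv_state {
    TCP_RECV_STATE_AWAIT_PDU_READY = 0,
    /* ... intermediate states elided ... */
    TCP_RECV_STATE_ERROR = 6,
};

struct tcp_qpair {
    enum tcp_pdu_recv_state recv_state;
};

static void
tcp_qpair_set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
{
    if (tqpair->recv_state == state) {
        /* Re-requesting the current state is a no-op, but it is logged;
         * during teardown each poll re-requests state 6, so the same
         * line appears once per call. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = TCP_RECV_STATE_AWAIT_PDU_READY };
    tcp_qpair_set_recv_state(&q, TCP_RECV_STATE_ERROR); /* transitions silently */
    tcp_qpair_set_recv_state(&q, TCP_RECV_STATE_ERROR); /* emits the error line */
    return 0;
}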
00:28:52.207 [2024-12-06 16:55:40.866804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:52.207 [2024-12-06 16:55:40.866826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1 through cid:3, and again for cid:0 through cid:3 on each of the admin qpairs below; these entries were interleaved mid-line with the tcp.c:1790 stream and have been untangled here]
00:28:52.207 [2024-12-06 16:55:40.866875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de3a40 is same with the state(6) to be set
00:28:52.207 [2024-12-06 16:55:40.866956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e14c70 is same with the state(6) to be set
00:28:52.207 [2024-12-06 16:55:40.867029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5ef0 is same with the state(6) to be set
00:28:52.207 [2024-12-06 16:55:40.867071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259a30 is same with the state(6) to be set [line repeated verbatim, interleaved with the entries below, through 16:55:40.867427]
00:28:52.207 [2024-12-06 16:55:40.867095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6910 is same with the state(6) to be set
00:28:52.207 [2024-12-06 16:55:40.867175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5610 is same with the state(6) to be set
00:28:52.208 [2024-12-06 16:55:40.867252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b40 is same with the state(6) to be set
00:28:52.208 [2024-12-06 16:55:40.867266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6d80 (9): Bad file descriptor
00:28:52.208 [2024-12-06 16:55:40.867338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0760 is same with the state(6) to be set
00:28:52.208 [2024-12-06 16:55:40.869506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1259f00 is same with the state(6) to be set
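One note on reading the completion lines: the "(00/08)" pair is the NVMe completion status split as status-code-type/status-code, so 00/08 is Generic Command Status / Command Aborted due to SQ Deletion, the expected outcome for queued ASYNC EVENT REQUESTs (and, below, for in-flight WRITEs) when their submission queues are deleted during the disconnect this test exercises. A small self-contained decode of that field is sketched here; the bit layout follows the NVMe completion status definition, and the helper itself is illustrative, not an SPDK API.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Build the 16-bit completion status word for SCT=0x00, SC=0x08:
     * bit 0 = phase tag, bits 8:1 = status code, bits 11:9 = status code type. */
    uint16_t status = (uint16_t)((0x00u << 9) | (0x08u << 1));

    uint8_t sct = (status >> 9) & 0x7;  /* 0x00: generic command status */
    uint8_t sc  = (status >> 1) & 0xff; /* 0x08: ABORTED - SQ DELETION */

    printf("status (%02x/%02x)%s\n", sct, sc,
           (sct == 0x00 && sc == 0x08) ? " = command aborted due to SQ deletion" : "");
    return 0;
}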
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.208 [2024-12-06 16:55:40.881886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.208 [2024-12-06 16:55:40.881891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.208 [2024-12-06 16:55:40.881897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.208 [2024-12-06 16:55:40.881902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.208 [2024-12-06 16:55:40.881909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.208 [2024-12-06 16:55:40.881914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.208 [2024-12-06 16:55:40.881921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.208 [2024-12-06 16:55:40.881926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.208 [2024-12-06 16:55:40.881933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.208 [2024-12-06 16:55:40.881942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.208 [2024-12-06 16:55:40.881949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.208 [2024-12-06 16:55:40.881954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.208 [2024-12-06 16:55:40.881960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.208 [2024-12-06 16:55:40.881966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.881972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.881978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.881984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.881990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.881996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.209 [2024-12-06 16:55:40.882439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.209 [2024-12-06 16:55:40.882446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.882991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.882998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.210 [2024-12-06 16:55:40.883111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.210 [2024-12-06 16:55:40.883116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883163] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:52.211 [2024-12-06 16:55:40.883529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.211 [2024-12-06 16:55:40.883534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.883546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.883558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.883570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.883581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de3a40 (9): Bad file descriptor 00:28:52.212 [2024-12-06 16:55:40.883881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddb440 is same with the state(6) to be set 00:28:52.212 [2024-12-06 16:55:40.883945] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e14c70 (9): Bad file descriptor 00:28:52.212 [2024-12-06 16:55:40.883961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.883994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:52.212 [2024-12-06 16:55:40.883999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5310 is same with the state(6) to be set 00:28:52.212 [2024-12-06 16:55:40.884018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a5ef0 (9): Bad file descriptor 00:28:52.212 [2024-12-06 16:55:40.884030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6910 (9): Bad file descriptor 00:28:52.212 [2024-12-06 16:55:40.884041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d5610 (9): Bad file descriptor 00:28:52.212 [2024-12-06 16:55:40.884050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2b40 (9): Bad file descriptor 00:28:52.212 [2024-12-06 16:55:40.884068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a0760 (9): Bad file descriptor 00:28:52.212 [2024-12-06 16:55:40.884110] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:52.212 [2024-12-06 16:55:40.884138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.212 [2024-12-06 16:55:40.884287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.212 [2024-12-06 16:55:40.884292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:52.212-00:28:52.482 [2024-12-06 16:55:40.884298-16:55:40.891258] [repetitive log output elided: nvme_qpair.c: 243:nvme_io_qpair_print_command printed WRITE sqid:1 cid:13 through cid:63 nsid:1 (lba:26240 through lba:32640, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:52.482 [2024-12-06 16:55:40.891265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aa430 is same with the state(6) to be set
00:28:52.482-00:28:52.484 [2024-12-06 16:55:40.893154-16:55:40.893987] [repetitive log output elided: WRITE sqid:1 cid:63 printed and aborted once more, then READ sqid:1 cid:0 through cid:62 nsid:1 (lba:24576 through lba:32512, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:52.484 [2024-12-06 16:55:40.894107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:52.484 [2024-12-06 16:55:40.894171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddb440 (9): Bad file descriptor
00:28:52.484 [2024-12-06 16:55:40.894189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a5310 (9): Bad file descriptor
00:28:52.484 [2024-12-06 16:55:40.894208] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:28:52.484 [2024-12-06 16:55:40.894217] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:28:52.484 [2024-12-06 16:55:40.894232] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:28:52.484-00:28:52.486 [2024-12-06 16:55:40.894272-16:55:40.895093] [repetitive log output elided: nvme_qpair.c: 243:nvme_io_qpair_print_command printed READ sqid:1 cid:0 through cid:63 nsid:1 (lba:24576 through lba:32640, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:52.486 [2024-12-06 16:55:40.895098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a9380 is same with the state(6) to be set
00:28:52.486 [2024-12-06 16:55:40.898009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.486 [2024-12-06 16:55:40.898030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a0760 with addr=10.0.0.2, port=4420
00:28:52.486 [2024-12-06 16:55:40.898037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0760 is same with the state(6) to be set
00:28:52.486-00:28:52.487 [2024-12-06 16:55:40.898308-16:55:40.898624] [repetitive log output elided: READ sqid:1 cid:0 through cid:24 nsid:1 (lba:24576 through lba:27648, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:28:52.487 [2024-12-06
16:55:40.898630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.898992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.898999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.487 [2024-12-06 16:55:40.899125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.487 [2024-12-06 16:55:40.899131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.488 [2024-12-06 16:55:40.900029] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:52.488 [2024-12-06 16:55:40.900295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL 
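An aside for readers triaging the connect() failure above: on Linux, errno 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 at that moment while the target side was being torn down. The snippet below is a minimal standalone sketch, not part of the SPDK test suite; the address and port are simply the ones reported in the log, and against an unreachable host the call may instead time out, so run it against 127.0.0.1 with a closed port to see ECONNREFUSED reliably.

/* Minimal standalone probe: connect() to a TCP port with no listener
 * fails with errno 111 (ECONNREFUSED) on Linux, the same value reported
 * by posix_sock_create above. Illustrative only. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* address taken from the log */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}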
00:28:52.488 [2024-12-06 16:55:40.900295 - 16:55:40.901114] [... 64 identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs for sqid:1 cid:0-63 (lba:24576-32640, len:128) elided ...]
00:28:52.489 [2024-12-06 16:55:40.901985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:52.489 [2024-12-06 16:55:40.901997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:52.489 [2024-12-06 16:55:40.902005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:52.489 [2024-12-06 16:55:40.902032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a0760 (9): Bad file descriptor
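A note on decoding the ubiquitous "ABORTED - SQ DELETION (00/08)" completions: SPDK prints the status as (SCT/SC), where status code type 0x00 is the NVMe Generic Command Status set and status code 0x08 within it is Command Aborted due to SQ Deletion, the expected outcome when a controller reset deletes the I/O submission queue with reads still queued. The decoder below is an illustrative sketch of the NVMe completion status-field layout, independent of SPDK.

/* Illustrative decoder for the NVMe completion Status Field printed above
 * in the form (SCT/SC). Viewing the upper 16 bits of completion dword 3:
 * bit 0 = phase tag, bits 8:1 = SC (status code), bits 11:9 = SCT (status
 * code type). SCT 0x0 / SC 0x08 is "Command Aborted due to SQ Deletion". */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Reconstruct the (00/08) status seen in the log. */
    uint16_t status = (uint16_t)((0x00u << 9) | (0x08u << 1));
    unsigned sct = (status >> 9) & 0x7;  /* status code type */
    unsigned sc  = (status >> 1) & 0xffu; /* status code */
    printf("SCT=0x%02x SC=0x%02x -> %s\n", sct, sc,
           (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION" : "other status");
    return 0;
}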
00:28:52.489 [2024-12-06 16:55:40.902439 - 16:55:40.903238] [... 64 identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs for sqid:1 cid:0-63 (lba:24576-32640, len:128) elided ...]
00:28:52.491 [2024-12-06 16:55:40.904146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.491 [2024-12-06 16:55:40.904434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.491 [2024-12-06 16:55:40.904440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.492 [2024-12-06 16:55:40.904933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.492 [2024-12-06 16:55:40.904940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.904946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.904952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.904957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.904963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2859250 is same with the state(6) to be set 00:28:52.493 [2024-12-06 16:55:40.905854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905956] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.905986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.905995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:52.493 [2024-12-06 16:55:40.906374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.493 [2024-12-06 16:55:40.906378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:52.494 [2024-12-06 16:55:40.906497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 
16:55:40.906622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.494 [2024-12-06 16:55:40.906708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:52.494 [2024-12-06 16:55:40.906714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2cf4690 is same with the state(6) to be set 00:28:52.494 [2024-12-06 16:55:40.907188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:52.494 [2024-12-06 16:55:40.907208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:28:52.494 [2024-12-06 16:55:40.907217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:52.494 [2024-12-06 16:55:40.907226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:52.494 [2024-12-06 16:55:40.907605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.494 [2024-12-06 16:55:40.907616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a6d80 with addr=10.0.0.2, port=4420 00:28:52.494 [2024-12-06 16:55:40.907623] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6d80 is same with the state(6) to be set 00:28:52.494 [2024-12-06 16:55:40.907941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.494 [2024-12-06 16:55:40.907948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a6910 with addr=10.0.0.2, port=4420 00:28:52.494 [2024-12-06 16:55:40.907953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6910 is same with the state(6) to be set 00:28:52.494 [2024-12-06 16:55:40.908406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.494 [2024-12-06 16:55:40.908439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d5610 with addr=10.0.0.2, port=4420 00:28:52.494 [2024-12-06 16:55:40.908449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5610 is same with the state(6) to be set 00:28:52.494 [2024-12-06 16:55:40.908459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:52.494 [2024-12-06 16:55:40.908465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:52.494 [2024-12-06 16:55:40.908473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:52.494 [2024-12-06 16:55:40.908481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:52.494 [2024-12-06 16:55:40.908493] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:28:52.494 [2024-12-06 16:55:40.908510] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
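Every completion in the dump above carries the status pair "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion — the queued reads and writes were discarded when submission queue 1 was deleted during the controller reset. A minimal decoder sketch for that "(SCT/SC)" notation (a hypothetical helper, not SPDK's own code; only the statuses seen in this log are mapped):

    # Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints, e.g. "(00/08)".
    GENERIC = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}  # subset of SCT 0x0 codes

    def decode_status(sct: int, sc: int) -> str:
        if sct == 0x0:  # generic command status
            return GENERIC.get(sc, "generic status 0x%02x" % sc)
        return "sct=0x%x sc=0x%02x" % (sct, sc)

    print(decode_status(0x00, 0x08))  # -> ABORTED - SQ DELETION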
00:28:52.494 [2024-12-06 16:55:40.909786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:52.494 task offset: 32768 on job bdev=Nvme4n1 fails

[2024-12-06T15:55:41.187Z] Latency(us)
All jobs ran with Core Mask 0x1, workload: verify, depth: 64, IO size: 65536, Verification LBA range: start 0x0 length 0x400; every job ended in about 0.77-0.78 seconds with error.

Device Information  runtime(s)  IOPS     MiB/s   Fail/s  TO/s  Average    min       max
Nvme1n1             0.77        249.08   15.57   83.03   0.00  190862.51  15728.64  176510.29
Nvme2n1             0.77        248.81   15.55   82.94   0.00  187656.96  14417.92  206219.95
Nvme3n1             0.78        246.48   15.41   82.16   0.00  186142.72  14090.24  179131.73
Nvme4n1             0.77        333.72   20.86   83.43   0.00  143830.78  12288.00  171267.41
Nvme5n1             0.77        247.78   15.49   82.59   0.00  178398.51  13817.17  187869.87
Nvme6n1             0.78        256.19   16.01   81.98   0.00  171136.72  6881.28   177384.11
Nvme7n1             0.77        250.00   15.62   83.33   0.00  169938.56  15400.96  191365.12
Nvme8n1             0.78        244.75   15.30   81.58   0.00  170764.91  13653.33  175636.48
Nvme9n1             0.77        248.49   15.53   82.83   0.00  164556.48  13325.65  180879.36
Nvme10n1            0.78        247.15   15.45   82.38   0.00  162296.53  17148.59  177384.11
===========================================================================================
[2024-12-06T15:55:41.188Z] Total    2572.45  160.78  826.26  0.00  171855.60  6881.28  206219.95
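The IOPS and MiB/s columns above are mutually consistent with the 65536-byte I/O size from the job headers: at 64 KiB per I/O, MiB/s is simply IOPS / 16 (e.g. 333.72 IOPS ≈ 20.86 MiB/s for Nvme4n1). A quick check of that arithmetic (a standalone sketch spot-checking three rows from the table, not part of the test suite):

    IO_SIZE = 65536  # bytes per I/O, from "IO size: 65536" in the job headers

    rows = {  # device: (reported IOPS, reported MiB/s)
        "Nvme1n1": (249.08, 15.57),
        "Nvme4n1": (333.72, 20.86),
        "Total": (2572.45, 160.78),
    }

    for name, (iops, mibps) in rows.items():
        derived = iops * IO_SIZE / (1024 * 1024)  # bytes/s converted to MiB/s
        assert abs(derived - mibps) < 0.01, (name, derived, mibps)
        print("%s: %.2f IOPS -> %.2f MiB/s (reported %.2f)" % (name, iops, derived, mibps))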
00:28:52.495 [2024-12-06 16:55:40.930493] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:52.495 [2024-12-06 16:55:40.930910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.495 [2024-12-06 16:55:40.930926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de3a40 with addr=10.0.0.2, port=4420 00:28:52.495 [2024-12-06 16:55:40.930935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de3a40 is same with the state(6) to be set 00:28:52.495 [2024-12-06 16:55:40.931253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.495 [2024-12-06 16:55:40.931263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a5310 with addr=10.0.0.2, port=4420 00:28:52.495 [2024-12-06 16:55:40.931268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5310 is same with the state(6) to be set 00:28:52.495 [2024-12-06 16:55:40.931476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.495 [2024-12-06 16:55:40.931484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e14c70 with addr=10.0.0.2, port=4420 00:28:52.495 [2024-12-06 16:55:40.931490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e14c70 is same with the state(6) to be set 00:28:52.495 [2024-12-06 16:55:40.931699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.495 [2024-12-06 16:55:40.931706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a5ef0 with addr=10.0.0.2, port=4420 00:28:52.495 [2024-12-06 16:55:40.931717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a5ef0 is same with the state(6) to be set 00:28:52.495 [2024-12-06 16:55:40.931727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6d80 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.931737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6910 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.931746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d5610 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.932351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:52.495 [2024-12-06 16:55:40.932376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:28:52.495 [2024-12-06 16:55:40.932714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.495 [2024-12-06 16:55:40.932725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1de2b40 with addr=10.0.0.2, port=4420 00:28:52.495 [2024-12-06 16:55:40.932732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1de2b40 is same with the state(6) to be set 00:28:52.495 [2024-12-06 16:55:40.932739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de3a40 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.932746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a5310 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.932754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x1e14c70 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.932763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a5ef0 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.932770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:52.495 [2024-12-06 16:55:40.932775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:52.495 [2024-12-06 16:55:40.932782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:52.495 [2024-12-06 16:55:40.932790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:52.495 [2024-12-06 16:55:40.932796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:52.495 [2024-12-06 16:55:40.932801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:52.495 [2024-12-06 16:55:40.932805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:52.495 [2024-12-06 16:55:40.932810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:52.495 [2024-12-06 16:55:40.932816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:52.495 [2024-12-06 16:55:40.932822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:52.495 [2024-12-06 16:55:40.932827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:52.495 [2024-12-06 16:55:40.932831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:52.495 [2024-12-06 16:55:40.932865] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:28:52.495 [2024-12-06 16:55:40.932874] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:28:52.495 [2024-12-06 16:55:40.932882] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:28:52.495 [2024-12-06 16:55:40.932893] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:28:52.495 [2024-12-06 16:55:40.933347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.495 [2024-12-06 16:55:40.933360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a0760 with addr=10.0.0.2, port=4420 00:28:52.495 [2024-12-06 16:55:40.933365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a0760 is same with the state(6) to be set 00:28:52.495 [2024-12-06 16:55:40.933679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.495 [2024-12-06 16:55:40.933687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddb440 with addr=10.0.0.2, port=4420 00:28:52.495 [2024-12-06 16:55:40.933693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddb440 is same with the state(6) to be set 00:28:52.495 [2024-12-06 16:55:40.933701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de2b40 (9): Bad file descriptor 00:28:52.495 [2024-12-06 16:55:40.933707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:52.495 [2024-12-06 16:55:40.933712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:52.495 [2024-12-06 16:55:40.933719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:52.495 [2024-12-06 16:55:40.933724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:52.495 [2024-12-06 16:55:40.933732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:28:52.495 [2024-12-06 16:55:40.933736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:28:52.495 [2024-12-06 16:55:40.933741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:52.495 [2024-12-06 16:55:40.933745] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:28:52.495 [2024-12-06 16:55:40.933751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:52.495 [2024-12-06 16:55:40.933757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:52.495 [2024-12-06 16:55:40.933763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:52.495 [2024-12-06 16:55:40.933768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:52.495 [2024-12-06 16:55:40.933773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:52.496 [2024-12-06 16:55:40.933777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:52.496 [2024-12-06 16:55:40.933783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:52.496 [2024-12-06 16:55:40.933788] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:28:52.496 [2024-12-06 16:55:40.933830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:28:52.496 [2024-12-06 16:55:40.933838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:52.496 [2024-12-06 16:55:40.933845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:52.496 [2024-12-06 16:55:40.933865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a0760 (9): Bad file descriptor 00:28:52.496 [2024-12-06 16:55:40.933873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddb440 (9): Bad file descriptor 00:28:52.496 [2024-12-06 16:55:40.933882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:52.496 [2024-12-06 16:55:40.933888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:52.496 [2024-12-06 16:55:40.933894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:52.496 [2024-12-06 16:55:40.933899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:52.496 [2024-12-06 16:55:40.934254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.496 [2024-12-06 16:55:40.934263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18d5610 with addr=10.0.0.2, port=4420 00:28:52.496 [2024-12-06 16:55:40.934268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d5610 is same with the state(6) to be set 00:28:52.496 [2024-12-06 16:55:40.934461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.496 [2024-12-06 16:55:40.934468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a6910 with addr=10.0.0.2, port=4420 00:28:52.496 [2024-12-06 16:55:40.934474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6910 is same with the state(6) to be set 00:28:52.496 [2024-12-06 16:55:40.934774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.496 [2024-12-06 16:55:40.934782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a6d80 with addr=10.0.0.2, port=4420 00:28:52.496 [2024-12-06 16:55:40.934788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6d80 is same with the state(6) to be set 00:28:52.496 [2024-12-06 16:55:40.934794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:52.496 [2024-12-06 16:55:40.934798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:52.496 [2024-12-06 16:55:40.934803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:52.496 [2024-12-06 16:55:40.934807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:28:52.496 [2024-12-06 16:55:40.934813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:28:52.496 [2024-12-06 16:55:40.934819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:28:52.496 [2024-12-06 16:55:40.934824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:52.496 [2024-12-06 16:55:40.934829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:28:52.496 [2024-12-06 16:55:40.934849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d5610 (9): Bad file descriptor 00:28:52.496 [2024-12-06 16:55:40.934855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6910 (9): Bad file descriptor 00:28:52.496 [2024-12-06 16:55:40.934863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a6d80 (9): Bad file descriptor 00:28:52.496 [2024-12-06 16:55:40.934883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:28:52.496 [2024-12-06 16:55:40.934889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:28:52.496 [2024-12-06 16:55:40.934895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:28:52.496 [2024-12-06 16:55:40.934900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:28:52.496 [2024-12-06 16:55:40.934906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:52.496 [2024-12-06 16:55:40.934912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:52.496 [2024-12-06 16:55:40.934917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:52.496 [2024-12-06 16:55:40.934922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:52.496 [2024-12-06 16:55:40.934928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:52.496 [2024-12-06 16:55:40.934933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:52.496 [2024-12-06 16:55:40.934938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:52.496 [2024-12-06 16:55:40.934943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
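Every failure in the cascade above traces back to the same root cause: connect() returning errno 111 (ECONNREFUSED on Linux) because nothing is listening on 10.0.0.2:4420 once the target is gone, which makes spdk_nvme_ctrlr_reconnect_poll_async fail and bdev_nvme abandon each controller reset. The same condition can be probed by hand with bash's built-in /dev/tcp (an illustrative one-liner, not part of the suite):

```bash
# Succeeds only while an NVMe/TCP listener is up on 10.0.0.2:4420.
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
    && echo "target is listening" \
    || echo "connect refused or timed out (the errno 111 case in the log)"
```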
00:28:52.496 16:55:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2374256 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2374256 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2374256 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:53.436 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:53.436 rmmod nvme_tcp 00:28:53.696 
rmmod nvme_fabrics 00:28:53.696 rmmod nvme_keyring 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2374081 ']' 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2374081 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2374081 ']' 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2374081 00:28:53.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2374081) - No such process 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2374081 is not found' 00:28:53.696 Process with pid 2374081 is not found 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.696 16:55:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.603 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:55.603 00:28:55.603 real 0m6.579s 00:28:55.603 user 0m14.344s 00:28:55.603 sys 0m0.976s 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:55.604 ************************************ 00:28:55.604 END TEST nvmf_shutdown_tc3 00:28:55.604 ************************************ 00:28:55.604 16:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:55.604 ************************************ 00:28:55.604 START TEST nvmf_shutdown_tc4 00:28:55.604 ************************************ 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:55.604 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:55.604 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.604 16:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:55.604 Found net devices under 0000:31:00.0: cvl_0_0 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.604 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:55.605 Found net devices under 0000:31:00.1: cvl_0_1 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:55.605 16:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:55.605 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:55.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:55.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:28:55.865 00:28:55.865 --- 10.0.0.2 ping statistics --- 00:28:55.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.865 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:55.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:55.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:55.865 00:28:55.865 --- 10.0.0.1 ping statistics --- 00:28:55.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:55.865 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2375592 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2375592 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2375592 ']' 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
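The topology behind these two pings: one port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to host the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and a comment-tagged iptables rule opens port 4420. The same pattern condensed into a sketch, substituting a veth pair for the back-to-back physical e810 ports used in this run:

```bash
ip netns add cvl_0_0_ns_spdk
ip link add cvl_0_1 type veth peer name cvl_0_0     # stand-ins for the two NICs
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the rule so teardown can strip it with 'iptables-save | grep -v SPDK_NVMF':
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target, as above
```

The matching cleanup already appeared at the end of tc3: the tagged rule is filtered out of iptables-save before iptables-restore, the namespace is removed, and the addresses are flushed.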
00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.865 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.125 [2024-12-06 16:55:44.590345] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:28:56.125 [2024-12-06 16:55:44.590402] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.125 [2024-12-06 16:55:44.668277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.125 [2024-12-06 16:55:44.690034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.125 [2024-12-06 16:55:44.690070] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.125 [2024-12-06 16:55:44.690077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.125 [2024-12-06 16:55:44.690082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.125 [2024-12-06 16:55:44.690086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.125 [2024-12-06 16:55:44.692001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.125 [2024-12-06 16:55:44.692167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.125 [2024-12-06 16:55:44.692344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.125 [2024-12-06 16:55:44.692345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.125 [2024-12-06 16:55:44.796121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:56.125 16:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.125 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.385 16:55:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.385 Malloc1 
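Each pass of the `for i in "${num_subsystems[@]}"` / `cat` pair above appends one block of RPCs to rpcs.txt. The helper's body is elided from this excerpt; assuming it uses the standard SPDK RPCs (bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), a plausible per-index block, with illustrative sizes, would be:

```bash
for i in {1..10}; do
  cat <<EOF >> rpcs.txt
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
```

Replaying such a file through scripts/rpc.py would account for the Malloc1..Malloc10 bdevs echoed here and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice that follows.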
00:28:56.385 [2024-12-06 16:55:44.893778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.385 Malloc2 00:28:56.385 Malloc3 00:28:56.385 Malloc4 00:28:56.385 Malloc5 00:28:56.385 Malloc6 00:28:56.644 Malloc7 00:28:56.644 Malloc8 00:28:56.644 Malloc9 00:28:56.644 Malloc10 00:28:56.644 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.644 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:56.644 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.644 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:56.644 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2375960 00:28:56.644 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:56.644 16:55:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:56.644 [2024-12-06 16:55:45.315965] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2375592 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2375592 ']' 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2375592 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2375592 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2375592' 00:29:02.036 killing process with pid 2375592 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2375592 00:29:02.036 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2375592 00:29:02.036 Write completed with error (sct=0, 
sc=8)
00:29:02.036 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.036 [2024-12-06 16:55:50.338877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:02.036 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.036 [2024-12-06 16:55:50.339600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:02.036 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.036 [2024-12-06 16:55:50.340252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.037 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.037 [2024-12-06 16:55:50.340925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e1910 is same with the state(6) to be set [record repeated 5x]
00:29:02.037 [2024-12-06 16:55:50.341182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e1de0 is same with the state(6) to be set [record repeated 3x]
00:29:02.037 [2024-12-06 16:55:50.341409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e22b0 is same with the state(6) to be set [record repeated 5x]
00:29:02.037 [2024-12-06 16:55:50.341505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.037 NVMe io qpair process completion error
00:29:02.037 [2024-12-06 16:55:50.342485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248db10 is same with the state(6) to be set [record repeated 13x]
00:29:02.037 [2024-12-06 16:55:50.342641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2215fd0 is same with the state(6) to be set [record repeated 4x]
00:29:02.037 [2024-12-06 16:55:50.342706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248d170 is same with the state(6) to be set [record repeated 7x]
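The "CQ transport error -6 (No such device or address)" records above are produced on the host side: spdk_nvme_qpair_process_completions(), named in the log, returns a negative errno such as -ENXIO (-6) once the TCP connection behind a qpair is gone, while writes that were already queued complete with a non-zero NVMe status (the "sct=0, sc=8" records). The following is a minimal sketch of a host poller that surfaces both failure modes; spdk_nvme_qpair_process_completions(), spdk_nvme_cpl_is_error() and struct spdk_nvme_cpl are real SPDK APIs, while the helper names write_done and poll_qpair are illustrative and not taken from this test.

#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback, registered as the cb_fn argument of e.g.
 * spdk_nvme_ns_cmd_write() at submission time. It prints the same
 * sct/sc fields seen in the log (sct = status code type, sc = status
 * code of the NVMe completion entry). */
static void
write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Drain one I/O qpair; a negative return value means the qpair itself
 * has failed (e.g. -ENXIO / -6 after the TCP transport loses the
 * target), as opposed to individual commands completing with an error. */
static int32_t
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no batch limit */);

	if (rc < 0) {
		/* Presumably the path on which the test application logs
		 * its "NVMe io qpair process completion error" line. */
		printf("NVMe io qpair process completion error (%d)\n", rc);
	}
	return rc;
}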
00:29:02.037 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.038 [2024-12-06 16:55:50.344130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e2b00 is same with the state(6) to be set [record repeated 6x]
00:29:02.038 [2024-12-06 16:55:50.344352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e2fd0 is same with the state(6) to be set [record repeated 5x]
00:29:02.038 [2024-12-06 16:55:50.344629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e34a0 is same with the state(6) to be set [record repeated 8x]
00:29:02.038 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.038 [2024-12-06 16:55:50.345938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.038 NVMe io qpair process completion error
00:29:02.038 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.038 [2024-12-06 16:55:50.346862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:02.038 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
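For reference, "sct" and "sc" are the two sub-fields of the 16-bit status in an NVMe completion queue entry: sct=0 selects the generic command status type, under which the NVMe base specification defines sc=0x8 as "Command Aborted due to SQ Deletion", consistent with the target deleting its submission queues while these writes were in flight. Below is a standalone sketch of the bit layout; the field names follow the spec, but the struct itself is illustrative and assumes a little-endian host with conventional bitfield ordering.

#include <stdint.h>
#include <stdio.h>

/* NVMe CQE status field (upper 16 bits of completion dword 3):
 * phase tag, then SC (8 bits), SCT (3 bits), CRD, More, DNR. */
struct nvme_status {
	uint16_t p   : 1; /* phase tag */
	uint16_t sc  : 8; /* status code; 0x08 = aborted due to SQ deletion */
	uint16_t sct : 3; /* status code type; 0 = generic command status */
	uint16_t crd : 2; /* command retry delay */
	uint16_t m   : 1; /* more status information available */
	uint16_t dnr : 1; /* do not retry */
};

int
main(void)
{
	/* Raw status with sct=0, sc=8, matching the "Write completed with
	 * error (sct=0, sc=8)" records above: SC occupies bits 8:1. */
	union {
		uint16_t raw;
		struct nvme_status st;
	} u = { .raw = 0x8 << 1 };

	printf("sct=%d, sc=%d\n", (int)u.st.sct, (int)u.st.sc);
	return 0;
}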
00:29:02.039 [2024-12-06 16:55:50.347525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.039 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.039 [2024-12-06 16:55:50.348212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:02.039 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.039 [2024-12-06 16:55:50.349339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.039 NVMe io qpair process completion error
00:29:02.040 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.040 [2024-12-06 16:55:50.350326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:02.040 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.040 [2024-12-06 16:55:50.350985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.040 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
00:29:02.040 [2024-12-06 16:55:50.351668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:02.040 Write completed with error (sct=0, sc=8) [record repeated; interleaved with: starting I/O failed: -6]
starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 
starting I/O failed: -6 00:29:02.040 [2024-12-06 16:55:50.353241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.040 NVMe io qpair process completion error 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 starting I/O failed: -6 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.040 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 [2024-12-06 16:55:50.354014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 
Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 [2024-12-06 16:55:50.354675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O 
failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 [2024-12-06 16:55:50.355361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 
starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 
starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 [2024-12-06 16:55:50.357460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.041 NVMe io qpair process completion error 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 starting I/O failed: -6 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.041 Write completed with error (sct=0, sc=8) 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 starting I/O failed: -6 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 starting I/O failed: -6 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 Write completed with error (sct=0, sc=8) 00:29:02.042 starting I/O 
00:29:02.042 [2024-12-06 16:55:50.358395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error records elided ...]
00:29:02.042 [2024-12-06 16:55:50.359064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error records elided ...]
00:29:02.042 [2024-12-06 16:55:50.359754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error records elided ...]
00:29:02.043 [2024-12-06 16:55:50.361027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.043 NVMe io qpair process completion error
[... repeated write-error records elided ...]
00:29:02.043 [2024-12-06 16:55:50.361981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error records elided ...]
00:29:02.043 [2024-12-06 16:55:50.362572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error records elided ...]
00:29:02.043 [2024-12-06 16:55:50.363300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error records elided ...]
00:29:02.044 [2024-12-06 16:55:50.364593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:02.044 NVMe io qpair process completion error
[... repeated write-error records elided ...]
00:29:02.044 [2024-12-06 16:55:50.365497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error records elided ...]
00:29:02.044 [2024-12-06 16:55:50.366147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error records elided ...]
00:29:02.044 [2024-12-06 16:55:50.366832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error records elided ...]
00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.044 Write completed with error (sct=0, sc=8) 00:29:02.044 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 [2024-12-06 16:55:50.368543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.045 NVMe io qpair process completion error 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 [2024-12-06 16:55:50.369322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O 
failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 [2024-12-06 16:55:50.369958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.045 starting I/O failed: -6 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 
00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 [2024-12-06 16:55:50.370671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error 
(sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error 
(sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.045 starting I/O failed: -6 00:29:02.045 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 [2024-12-06 16:55:50.372172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.046 NVMe io qpair process completion error 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 [2024-12-06 16:55:50.373123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 [2024-12-06 16:55:50.373817] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, 
sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 [2024-12-06 16:55:50.374533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 
00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.046 starting I/O failed: -6 00:29:02.046 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 Write completed with error (sct=0, sc=8) 00:29:02.047 starting I/O failed: -6 00:29:02.047 [2024-12-06 16:55:50.376025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:02.047 NVMe io qpair process completion error 00:29:02.047 Initializing NVMe Controllers 00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:02.047 Controller IO queue size 128, less than required. 00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:02.047 Initializing NVMe Controllers
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:02.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:02.047 Controller IO queue size 128, less than required.
00:29:02.047 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
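The repeated queue-size message above means the perf tool requested a deeper submission queue than the 128 entries each fabrics controller advertises, so the surplus requests wait inside the host NVMe driver. As a hedged sketch only (this is not the invocation the harness actually used, and the values here are invented for illustration; -q, -o, -w, -t and -r are standard spdk_nvme_perf options), a rerun that heeds the advice might look like:

    # Sketch: rerun spdk_nvme_perf with a queue depth at or below the
    # controller's advertised IO queue size (128), so writes are not
    # queued inside the driver. -q queue depth, -o IO size in bytes,
    # -w workload, -t run time in seconds, -r target transport ID.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'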
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:02.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:02.047 Initialization complete. Launching workers.
00:29:02.047 ========================================================
00:29:02.047                                                                   Latency(us)
00:29:02.047 Device Information                                             :     IOPS    MiB/s   Average       min       max
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   2579.10   110.82  49652.01    452.21  82884.65
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   2586.73   111.15  49514.91    419.09  83211.50
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  2559.20   109.97  50058.98    658.44  82906.15
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   2604.94   111.93  49193.70    658.99  82234.57
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   2587.57   111.18  49545.08    680.87  98674.01
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   2608.32   112.08  49163.19    605.62  97656.07
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   2662.53   114.41  48173.68    610.21  90319.08
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   2652.79   113.99  48365.32    659.32  92237.91
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   2635.00   113.22  48707.32    342.94  82684.12
00:29:02.047 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   2570.84   110.47  49536.73    674.11  83489.00
00:29:02.047 ========================================================
00:29:02.047 Total                                                          : 26047.04  1119.21  49183.87    342.94  98674.01
00:29:02.047
00:29:02.047 [2024-12-06 16:55:50.378747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6390 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5a00 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f53a0 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f56d0 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2486f00 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5d30 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f66c0 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6060 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2481ff0 is same with the state(6) to be set
00:29:02.047 [2024-12-06 16:55:50.378954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5070 is same with the state(6) to be set
00:29:02.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:02.047 16:55:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2375960
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2375960
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2375960
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
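The NOT/valid_exec_arg trace above is the harness asserting that waiting on the killed perf process (pid 2375960) exits nonzero. A simplified bash sketch of that pattern (illustrative only; the real helper lives in autotest_common.sh and does more bookkeeping than this):

    NOT() {
        # Run the wrapped command; succeed only if it fails, mirroring
        # the es=1 and (( !es == 0 )) steps in the trace above.
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    NOT wait 2375960   # wait reports the perf process's nonzero exit status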
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:02.986 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2375592 ']'
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2375592
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2375592 ']'
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2375592
00:29:02.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2375592) - No such process
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2375592 is not found'
00:29:02.986 Process with pid 2375592 is not found
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:02.986 16:55:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
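The iptr step in the cleanup above reduces to one pipeline: re-load the saved firewall rules minus the ones the test tagged. Based only on the three commands traced (iptables-save, grep -v SPDK_NVMF, iptables-restore), the effect is approximately:

    # Restore iptables state without any rule mentioning SPDK_NVMF,
    # i.e. drop the rules the NVMe-oF test added for its traffic.
    iptables-save | grep -v SPDK_NVMF | iptables-restore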
00:29:05.523 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:05.523
00:29:05.523 real 0m9.394s
00:29:05.523 user 0m24.764s
00:29:05.523 sys 0m3.918s
00:29:05.523 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:05.523 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:05.523 ************************************
00:29:05.523 END TEST nvmf_shutdown_tc4
00:29:05.524 ************************************
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:05.524
00:29:05.524 real 0m36.888s
00:29:05.524 user 1m29.616s
00:29:05.524 sys 0m10.950s
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:05.524 ************************************
00:29:05.524 END TEST nvmf_shutdown
00:29:05.524 ************************************
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:05.524 ************************************
00:29:05.524 START TEST nvmf_nsid
00:29:05.524 ************************************
00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:05.524 * Looking for test storage...
00:29:05.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.524 --rc genhtml_branch_coverage=1 00:29:05.524 --rc genhtml_function_coverage=1 00:29:05.524 --rc genhtml_legend=1 00:29:05.524 --rc geninfo_all_blocks=1 00:29:05.524 --rc geninfo_unexecuted_blocks=1 00:29:05.524 00:29:05.524 ' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.524 --rc genhtml_branch_coverage=1 00:29:05.524 --rc genhtml_function_coverage=1 00:29:05.524 --rc genhtml_legend=1 00:29:05.524 --rc geninfo_all_blocks=1 00:29:05.524 --rc geninfo_unexecuted_blocks=1 00:29:05.524 00:29:05.524 ' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.524 --rc genhtml_branch_coverage=1 00:29:05.524 --rc genhtml_function_coverage=1 00:29:05.524 --rc genhtml_legend=1 00:29:05.524 --rc geninfo_all_blocks=1 00:29:05.524 --rc geninfo_unexecuted_blocks=1 00:29:05.524 00:29:05.524 ' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.524 --rc genhtml_branch_coverage=1 00:29:05.524 --rc genhtml_function_coverage=1 00:29:05.524 --rc genhtml_legend=1 00:29:05.524 --rc geninfo_all_blocks=1 00:29:05.524 --rc geninfo_unexecuted_blocks=1 00:29:05.524 00:29:05.524 ' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:05.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.524 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.525 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.525 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.525 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.525 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.525 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.525 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.525 16:55:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.800 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:10.801 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:10.801 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
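gather_supported_nvmf_pci_devs, traced above, builds allow-lists of Intel E810/X722 and Mellanox device IDs, then resolves each matching PCI address to its kernel net device through sysfs and keeps only interfaces that are up. The same lookup can be reproduced by hand; a minimal sketch using the 0000:31:00.0 address found in this run, with everything else illustrative:

    # hedged sketch: resolve a PCI NIC to its net device, as the harness
    # does via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci=0000:31:00.0                          # 0x8086:0x159b, an E810 port
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        name=${dev##*/}                       # e.g. cvl_0_0
        echo "Found net device under $pci: $name ($(cat "$dev/operstate"))"
    done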
00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:10.801 Found net devices under 0000:31:00.0: cvl_0_0 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:10.801 Found net devices under 0000:31:00.1: cvl_0_1 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.801 16:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.801 16:55:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:29:10.801 00:29:10.801 --- 10.0.0.2 ping statistics --- 00:29:10.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.801 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:29:10.801 00:29:10.801 --- 10.0.0.1 ping statistics --- 00:29:10.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.801 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2381644 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2381644 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2381644 ']' 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.801 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.801 [2024-12-06 16:55:59.235267] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:29:10.801 [2024-12-06 16:55:59.235317] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.801 [2024-12-06 16:55:59.318349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.801 [2024-12-06 16:55:59.335445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.801 [2024-12-06 16:55:59.335478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.801 [2024-12-06 16:55:59.335486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.801 [2024-12-06 16:55:59.335493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.802 [2024-12-06 16:55:59.335499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.802 [2024-12-06 16:55:59.336059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2381665 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=b16f299e-8a1e-4799-a1b7-652f87fa1ff8 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=6300eb4c-c68b-41fb-92de-bea960b2d4b1 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=245c868b-5deb-4d0d-ab02-784158497f14 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.802 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:10.802 null0 00:29:10.802 null1 00:29:10.802 [2024-12-06 16:55:59.470888] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:29:10.802 [2024-12-06 16:55:59.470937] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2381665 ] 00:29:10.802 null2 00:29:10.802 [2024-12-06 16:55:59.481610] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.063 [2024-12-06 16:55:59.505824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2381665 /var/tmp/tgt2.sock 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2381665 ']' 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:11.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
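The nsid test starting here runs a second SPDK target on its own RPC socket (/var/tmp/tgt2.sock, listening below on 10.0.0.1 port 4421) and backs the nqn.2024-10.io.spdk:cnode subsystems with null bdevs whose namespace UUIDs come from uuidgen. The property under test, checked below with nvme id-ns and jq, is that each namespace's reported NGUID is the dash-stripped form of its UUID. A minimal sketch of that check, assuming a namespace created with $uuid and visible as /dev/nvme0n1:

    # hedged sketch of the UUID -> NGUID comparison traced below
    uuid=$(uuidgen)                            # assigned to the namespace at create time
    want=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $got == "$want" ]] && echo "nguid matches uuid $uuid"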
00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:11.063 [2024-12-06 16:55:59.547530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.063 [2024-12-06 16:55:59.566859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:11.063 16:55:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:11.634 [2024-12-06 16:56:00.039517] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.634 [2024-12-06 16:56:00.055710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:11.634 nvme0n1 nvme0n2 00:29:11.634 nvme1n1 00:29:11.634 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:11.634 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:11.634 16:56:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:13.016 16:56:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:13.954 16:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid b16f299e-8a1e-4799-a1b7-652f87fa1ff8 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b16f299e8a1e4799a1b7652f87fa1ff8 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B16F299E8A1E4799A1B7652F87FA1FF8 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ B16F299E8A1E4799A1B7652F87FA1FF8 == \B\1\6\F\2\9\9\E\8\A\1\E\4\7\9\9\A\1\B\7\6\5\2\F\8\7\F\A\1\F\F\8 ]] 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 6300eb4c-c68b-41fb-92de-bea960b2d4b1 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:13.954 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6300eb4cc68b41fb92debea960b2d4b1 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6300EB4CC68B41FB92DEBEA960B2D4B1 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 6300EB4CC68B41FB92DEBEA960B2D4B1 == \6\3\0\0\E\B\4\C\C\6\8\B\4\1\F\B\9\2\D\E\B\E\A\9\6\0\B\2\D\4\B\1 ]] 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:13.955 16:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 245c868b-5deb-4d0d-ab02-784158497f14 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:29:13.955 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:14.214 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=245c868b5deb4d0dab02784158497f14 00:29:14.214 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 245C868B5DEB4D0DAB02784158497F14 00:29:14.214 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 245C868B5DEB4D0DAB02784158497F14 == \2\4\5\C\8\6\8\B\5\D\E\B\4\D\0\D\A\B\0\2\7\8\4\1\5\8\4\9\7\F\1\4 ]] 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2381665 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2381665 ']' 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2381665 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381665 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381665' 00:29:14.215 killing process with pid 2381665 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2381665 00:29:14.215 16:56:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2381665 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:14.474 rmmod nvme_tcp 00:29:14.474 rmmod nvme_fabrics 00:29:14.474 rmmod nvme_keyring 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2381644 ']' 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2381644 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2381644 ']' 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2381644 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:14.474 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2381644 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2381644' 00:29:14.733 killing process with pid 2381644 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2381644 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2381644 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.733 16:56:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.263 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.263 00:29:17.263 real 0m11.586s 00:29:17.263 user 0m9.056s 
00:29:17.263 sys 0m4.937s 00:29:17.263 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.263 16:56:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:17.263 ************************************ 00:29:17.263 END TEST nvmf_nsid 00:29:17.263 ************************************ 00:29:17.263 16:56:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:17.263 00:29:17.263 real 18m3.641s 00:29:17.263 user 49m11.167s 00:29:17.263 sys 4m0.514s 00:29:17.263 16:56:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:17.263 16:56:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:17.263 ************************************ 00:29:17.263 END TEST nvmf_target_extra 00:29:17.263 ************************************ 00:29:17.263 16:56:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:17.264 16:56:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:17.264 16:56:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.264 16:56:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.264 ************************************ 00:29:17.264 START TEST nvmf_host 00:29:17.264 ************************************ 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:17.264 * Looking for test storage... 00:29:17.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:17.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.264 --rc genhtml_branch_coverage=1 00:29:17.264 --rc genhtml_function_coverage=1 00:29:17.264 --rc genhtml_legend=1 00:29:17.264 --rc geninfo_all_blocks=1 00:29:17.264 --rc geninfo_unexecuted_blocks=1 00:29:17.264 00:29:17.264 ' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:17.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.264 --rc genhtml_branch_coverage=1 00:29:17.264 --rc genhtml_function_coverage=1 00:29:17.264 --rc genhtml_legend=1 00:29:17.264 --rc geninfo_all_blocks=1 00:29:17.264 --rc geninfo_unexecuted_blocks=1 00:29:17.264 00:29:17.264 ' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:17.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.264 --rc genhtml_branch_coverage=1 00:29:17.264 --rc genhtml_function_coverage=1 00:29:17.264 --rc genhtml_legend=1 00:29:17.264 --rc geninfo_all_blocks=1 00:29:17.264 --rc geninfo_unexecuted_blocks=1 00:29:17.264 00:29:17.264 ' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:17.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.264 --rc genhtml_branch_coverage=1 00:29:17.264 --rc genhtml_function_coverage=1 00:29:17.264 --rc genhtml_legend=1 00:29:17.264 --rc geninfo_all_blocks=1 00:29:17.264 --rc geninfo_unexecuted_blocks=1 00:29:17.264 00:29:17.264 ' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
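The 'lt 1.15 2' trace just above is scripts/common.sh's version comparison: each version string is split on '.', '-', and ':' into an array and the fields are compared numerically, left to right, with missing fields treated as zero. A simplified sketch of the same idea (the real cmp_versions also validates each field as a decimal):

    # hedged, simplified sketch of the cmp_versions helper traced above
    lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                               # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov: use pre-2.x option syntax"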
00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.264 16:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.264 ************************************ 00:29:17.264 START TEST nvmf_multicontroller 00:29:17.264 ************************************ 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:17.265 * Looking for test storage... 
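The stray "[: : integer expression expected" captured above is bash's test builtin refusing an empty string as a numeric operand of -eq; the harness tolerates it because the check sits inside a conditional. A two-line sketch of the failure and the usual guard (the variable name is hypothetical):

    flag=""
    if [ "$flag" -eq 1 ]; then echo on; fi       # reproduces: [: : integer expression expected
    if [ "${flag:-0}" -eq 1 ]; then echo on; fi  # defaulting to 0 keeps the test quiet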
00:29:17.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:17.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.265 --rc genhtml_branch_coverage=1 00:29:17.265 --rc genhtml_function_coverage=1 00:29:17.265 --rc genhtml_legend=1 00:29:17.265 --rc geninfo_all_blocks=1 00:29:17.265 --rc geninfo_unexecuted_blocks=1 00:29:17.265 00:29:17.265 ' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:17.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.265 --rc genhtml_branch_coverage=1 00:29:17.265 --rc genhtml_function_coverage=1 00:29:17.265 --rc genhtml_legend=1 00:29:17.265 --rc geninfo_all_blocks=1 00:29:17.265 --rc geninfo_unexecuted_blocks=1 00:29:17.265 00:29:17.265 ' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:17.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.265 --rc genhtml_branch_coverage=1 00:29:17.265 --rc genhtml_function_coverage=1 00:29:17.265 --rc genhtml_legend=1 00:29:17.265 --rc geninfo_all_blocks=1 00:29:17.265 --rc geninfo_unexecuted_blocks=1 00:29:17.265 00:29:17.265 ' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:17.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.265 --rc genhtml_branch_coverage=1 00:29:17.265 --rc genhtml_function_coverage=1 00:29:17.265 --rc genhtml_legend=1 00:29:17.265 --rc geninfo_all_blocks=1 00:29:17.265 --rc geninfo_unexecuted_blocks=1 00:29:17.265 00:29:17.265 ' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:17.265 16:56:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:17.265 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:17.266 16:56:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.266 16:56:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.543 
16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:22.543 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:22.543 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.543 16:56:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:22.543 Found net devices under 0000:31:00.0: cvl_0_0 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:22.543 Found net devices under 0000:31:00.1: cvl_0_1 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
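The discovery pass above matches PCI vendor and device IDs against a cache and then lists the network interfaces that sit under each matching device in sysfs. A rough standalone approximation, assuming a much-simplified version of gather_supported_nvmf_pci_devs limited to the E810 device ID seen in this run:

    # Walk PCI devices and report net interfaces under Intel 0x159b (E810) NICs.
    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done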
00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.543 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:29:22.544 00:29:22.544 --- 10.0.0.2 ping statistics --- 00:29:22.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.544 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:29:22.544 00:29:22.544 --- 10.0.0.1 ping statistics --- 00:29:22.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.544 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2386783 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2386783 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2386783 ']' 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.544 16:56:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:22.544 [2024-12-06 16:56:11.009838] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:29:22.544 [2024-12-06 16:56:11.009891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.544 [2024-12-06 16:56:11.095370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:22.544 [2024-12-06 16:56:11.115563] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.544 [2024-12-06 16:56:11.115601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.544 [2024-12-06 16:56:11.115609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.544 [2024-12-06 16:56:11.115617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.544 [2024-12-06 16:56:11.115624] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.544 [2024-12-06 16:56:11.117167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.544 [2024-12-06 16:56:11.117301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.544 [2024-12-06 16:56:11.117302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.114 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.114 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:23.114 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:23.114 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.114 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 [2024-12-06 16:56:11.819598] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 Malloc0 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 [2024-12-06 16:56:11.869542] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 [2024-12-06 16:56:11.877464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 Malloc1 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.374 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2387131 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2387131 /var/tmp/bdevperf.sock 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2387131 ']' 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
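For reference, the target-side setup that the rpc_cmd records above perform for cnode1 can be replayed against the default RPC socket; a sketch assuming scripts/rpc.py from the checked-out SPDK tree (cnode2 follows the same pattern with Malloc1 and its own serial):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The second listener on port 4421 is what the later bdevperf attach, detach, and re-attach steps exercise.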
00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.375 16:56:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.634 NVMe0n1 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.634 1 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 
-- # set +x 00:29:23.634 request: 00:29:23.634 { 00:29:23.634 "name": "NVMe0", 00:29:23.634 "trtype": "tcp", 00:29:23.634 "traddr": "10.0.0.2", 00:29:23.634 "adrfam": "ipv4", 00:29:23.634 "trsvcid": "4420", 00:29:23.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.634 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:23.634 "hostaddr": "10.0.0.1", 00:29:23.634 "prchk_reftag": false, 00:29:23.634 "prchk_guard": false, 00:29:23.634 "hdgst": false, 00:29:23.634 "ddgst": false, 00:29:23.634 "allow_unrecognized_csi": false, 00:29:23.634 "method": "bdev_nvme_attach_controller", 00:29:23.634 "req_id": 1 00:29:23.634 } 00:29:23.634 Got JSON-RPC error response 00:29:23.634 response: 00:29:23.634 { 00:29:23.634 "code": -114, 00:29:23.634 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:23.634 } 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.634 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.894 request: 00:29:23.894 { 00:29:23.894 "name": "NVMe0", 00:29:23.894 "trtype": "tcp", 00:29:23.894 "traddr": "10.0.0.2", 00:29:23.894 "adrfam": "ipv4", 00:29:23.894 "trsvcid": "4420", 00:29:23.894 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:23.894 "hostaddr": "10.0.0.1", 00:29:23.894 "prchk_reftag": false, 00:29:23.894 "prchk_guard": false, 00:29:23.894 "hdgst": false, 00:29:23.894 "ddgst": false, 00:29:23.894 "allow_unrecognized_csi": false, 00:29:23.894 "method": "bdev_nvme_attach_controller", 00:29:23.894 "req_id": 1 00:29:23.894 } 00:29:23.894 Got 
JSON-RPC error response 00:29:23.894 response: 00:29:23.894 { 00:29:23.894 "code": -114, 00:29:23.894 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:23.894 } 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.894 request: 00:29:23.894 { 00:29:23.894 "name": "NVMe0", 00:29:23.894 "trtype": "tcp", 00:29:23.894 "traddr": "10.0.0.2", 00:29:23.894 "adrfam": "ipv4", 00:29:23.894 "trsvcid": "4420", 00:29:23.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.894 "hostaddr": "10.0.0.1", 00:29:23.894 "prchk_reftag": false, 00:29:23.894 "prchk_guard": false, 00:29:23.894 "hdgst": false, 00:29:23.894 "ddgst": false, 00:29:23.894 "multipath": "disable", 00:29:23.894 "allow_unrecognized_csi": false, 00:29:23.894 "method": "bdev_nvme_attach_controller", 00:29:23.894 "req_id": 1 00:29:23.894 } 00:29:23.894 Got JSON-RPC error response 00:29:23.894 response: 00:29:23.894 { 00:29:23.894 "code": -114, 00:29:23.894 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:23.894 } 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:23.894 16:56:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.894 request: 00:29:23.894 { 00:29:23.894 "name": "NVMe0", 00:29:23.894 "trtype": "tcp", 00:29:23.894 "traddr": "10.0.0.2", 00:29:23.894 "adrfam": "ipv4", 00:29:23.894 "trsvcid": "4420", 00:29:23.894 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.894 "hostaddr": "10.0.0.1", 00:29:23.894 "prchk_reftag": false, 00:29:23.894 "prchk_guard": false, 00:29:23.894 "hdgst": false, 00:29:23.894 "ddgst": false, 00:29:23.894 "multipath": "failover", 00:29:23.894 "allow_unrecognized_csi": false, 00:29:23.894 "method": "bdev_nvme_attach_controller", 00:29:23.894 "req_id": 1 00:29:23.894 } 00:29:23.894 Got JSON-RPC error response 00:29:23.894 response: 00:29:23.894 { 00:29:23.894 "code": -114, 00:29:23.894 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:23.894 } 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:23.894 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.895 
16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.895 NVMe0n1 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.895 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.153 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:24.154 16:56:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:25.088 { 00:29:25.088 "results": [ 00:29:25.088 { 00:29:25.088 "job": "NVMe0n1", 00:29:25.088 "core_mask": "0x1", 00:29:25.088 "workload": "write", 00:29:25.088 "status": "finished", 00:29:25.088 "queue_depth": 128, 00:29:25.088 "io_size": 4096, 00:29:25.088 "runtime": 1.006282, 00:29:25.088 "iops": 29231.36854281404, 00:29:25.088 "mibps": 114.18503337036735, 00:29:25.088 "io_failed": 0, 00:29:25.088 "io_timeout": 0, 00:29:25.088 "avg_latency_us": 4368.406395829792, 00:29:25.088 "min_latency_us": 2116.266666666667, 00:29:25.088 "max_latency_us": 15728.64 00:29:25.088 } 00:29:25.088 ], 00:29:25.088 "core_count": 1 00:29:25.088 } 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 
-- # [[ -n '' ]] 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2387131 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2387131 ']' 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2387131 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2387131 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2387131' 00:29:25.347 killing process with pid 2387131 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2387131 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2387131 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.347 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:25.348 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:25.348 [2024-12-06 16:56:11.960147] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:29:25.348 [2024-12-06 16:56:11.960200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2387131 ] 00:29:25.348 [2024-12-06 16:56:12.038121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.348 [2024-12-06 16:56:12.056793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.348 [2024-12-06 16:56:12.670970] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 2acfa0ac-07d7-4508-9fdc-2b9439dcda6c already exists 00:29:25.348 [2024-12-06 16:56:12.670996] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:2acfa0ac-07d7-4508-9fdc-2b9439dcda6c alias for bdev NVMe1n1 00:29:25.348 [2024-12-06 16:56:12.671005] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:25.348 Running I/O for 1 seconds... 00:29:25.348 29224.00 IOPS, 114.16 MiB/s 00:29:25.348 Latency(us) 00:29:25.348 [2024-12-06T15:56:14.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.348 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:25.348 NVMe0n1 : 1.01 29231.37 114.19 0.00 0.00 4368.41 2116.27 15728.64 00:29:25.348 [2024-12-06T15:56:14.041Z] =================================================================================================================== 00:29:25.348 [2024-12-06T15:56:14.041Z] Total : 29231.37 114.19 0.00 0.00 4368.41 2116.27 15728.64 00:29:25.348 Received shutdown signal, test time was about 1.000000 seconds 00:29:25.348 00:29:25.348 Latency(us) 00:29:25.348 [2024-12-06T15:56:14.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.348 [2024-12-06T15:56:14.041Z] =================================================================================================================== 00:29:25.348 [2024-12-06T15:56:14.041Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.348 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.348 16:56:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.348 rmmod nvme_tcp 00:29:25.348 rmmod nvme_fabrics 00:29:25.348 rmmod nvme_keyring 00:29:25.606 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.606 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:25.606 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
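The run above exercises the multicontroller path end to end: a second controller (NVMe1) is attached to the same subsystem through bdevperf's RPC socket, bdev_nvme_get_controllers confirms two controllers are present, a one-second write workload is driven through perform_tests (about 29231 IOPS / 114 MiB/s at queue depth 128 with 4096-byte IO in this run), and NVMe1 is detached again. A minimal sketch of that sequence, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock and using scripts/rpc.py directly in place of the log's rpc_cmd test wrapper:

  # attach a second controller to the same subsystem via the 4421 listener,
  # pinning the host-side source address with -i as the test does above
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
  # sanity check: two controllers should now be reported
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe
  # run the configured workload; this produces the JSON result block seen above
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # drop the second path again
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1

The "Bdev name ... already exists" errors captured in try.txt are expected here: the second attach targets the same namespace UUID, so bdev registration of the alias fails while the controller itself still attaches.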
00:29:25.606 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2386783 ']' 00:29:25.606 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2386783 00:29:25.606 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2386783 ']' 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2386783 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2386783 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2386783' 00:29:25.607 killing process with pid 2386783 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2386783 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2386783 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:25.607 16:56:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.138 00:29:28.138 real 0m10.706s 00:29:28.138 user 0m12.894s 00:29:28.138 sys 0m4.551s 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.138 ************************************ 00:29:28.138 END TEST nvmf_multicontroller 00:29:28.138 ************************************ 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.138 ************************************ 00:29:28.138 START TEST nvmf_aer 00:29:28.138 ************************************ 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:28.138 * Looking for test storage... 00:29:28.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.138 --rc genhtml_branch_coverage=1 00:29:28.138 --rc genhtml_function_coverage=1 00:29:28.138 --rc genhtml_legend=1 00:29:28.138 --rc geninfo_all_blocks=1 00:29:28.138 --rc geninfo_unexecuted_blocks=1 00:29:28.138 00:29:28.138 ' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.138 --rc genhtml_branch_coverage=1 00:29:28.138 --rc genhtml_function_coverage=1 00:29:28.138 --rc genhtml_legend=1 00:29:28.138 --rc geninfo_all_blocks=1 00:29:28.138 --rc geninfo_unexecuted_blocks=1 00:29:28.138 00:29:28.138 ' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.138 --rc genhtml_branch_coverage=1 00:29:28.138 --rc genhtml_function_coverage=1 00:29:28.138 --rc genhtml_legend=1 00:29:28.138 --rc geninfo_all_blocks=1 00:29:28.138 --rc geninfo_unexecuted_blocks=1 00:29:28.138 00:29:28.138 ' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.138 --rc genhtml_branch_coverage=1 00:29:28.138 --rc genhtml_function_coverage=1 00:29:28.138 --rc genhtml_legend=1 00:29:28.138 --rc geninfo_all_blocks=1 00:29:28.138 --rc geninfo_unexecuted_blocks=1 00:29:28.138 00:29:28.138 ' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.138 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.139 16:56:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:33.417 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.417 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:33.418 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:33.418 Found net devices under 0000:31:00.0: cvl_0_0 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.418 16:56:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:33.418 Found net devices under 0000:31:00.1: cvl_0_1 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.418 
16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:29:33.418 00:29:33.418 --- 10.0.0.2 ping statistics --- 00:29:33.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.418 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:29:33.418 00:29:33.418 --- 10.0.0.1 ping statistics --- 00:29:33.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.418 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2391821 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2391821 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2391821 ']' 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.418 16:56:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:33.418 [2024-12-06 16:56:21.700204] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:29:33.418 [2024-12-06 16:56:21.700259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.418 [2024-12-06 16:56:21.786137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.418 [2024-12-06 16:56:21.808465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.418 [2024-12-06 16:56:21.808504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.418 [2024-12-06 16:56:21.808513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.418 [2024-12-06 16:56:21.808519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.418 [2024-12-06 16:56:21.808525] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.418 [2024-12-06 16:56:21.810161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.418 [2024-12-06 16:56:21.810306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.418 [2024-12-06 16:56:21.810451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.418 [2024-12-06 16:56:21.810452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.991 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.992 [2024-12-06 16:56:22.537234] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.992 Malloc0 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.992 [2024-12-06 16:56:22.602014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:33.992 [ 00:29:33.992 { 00:29:33.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:33.992 "subtype": "Discovery", 00:29:33.992 "listen_addresses": [], 00:29:33.992 "allow_any_host": true, 00:29:33.992 "hosts": [] 00:29:33.992 }, 00:29:33.992 { 00:29:33.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:33.992 "subtype": "NVMe", 00:29:33.992 "listen_addresses": [ 00:29:33.992 { 00:29:33.992 "trtype": "TCP", 00:29:33.992 "adrfam": "IPv4", 00:29:33.992 "traddr": "10.0.0.2", 00:29:33.992 "trsvcid": "4420" 00:29:33.992 } 00:29:33.992 ], 00:29:33.992 "allow_any_host": true, 00:29:33.992 "hosts": [], 00:29:33.992 "serial_number": "SPDK00000000000001", 00:29:33.992 "model_number": "SPDK bdev Controller", 00:29:33.992 "max_namespaces": 2, 00:29:33.992 "min_cntlid": 1, 00:29:33.992 "max_cntlid": 65519, 00:29:33.992 "namespaces": [ 00:29:33.992 { 00:29:33.992 "nsid": 1, 00:29:33.992 "bdev_name": "Malloc0", 00:29:33.992 "name": "Malloc0", 00:29:33.992 "nguid": "EF3296C1B1ED47A8B0872DDCB7F06336", 00:29:33.992 "uuid": "ef3296c1-b1ed-47a8-b087-2ddcb7f06336" 00:29:33.992 } 00:29:33.992 ] 00:29:33.992 } 00:29:33.992 ] 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2392043 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:33.992 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.254 Malloc1 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.254 Asynchronous Event Request test 00:29:34.254 Attaching to 10.0.0.2 00:29:34.254 Attached to 10.0.0.2 00:29:34.254 Registering asynchronous event callbacks... 00:29:34.254 Starting namespace attribute notice tests for all controllers... 00:29:34.254 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:34.254 aer_cb - Changed Namespace 00:29:34.254 Cleaning up... 
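The AER exercise above follows the pattern in host/aer.sh: the target exposes a subsystem capped at two namespaces (-m 2) with Malloc0 as nsid 1, the aer test binary connects and arms its event callbacks, and adding Malloc1 as a second namespace is what fires the namespace-attribute-changed notice logged as "aer_cb - Changed Namespace"; the nvmf_get_subsystems output that follows confirms both namespaces are present. A minimal sketch of the target-side trigger, again substituting scripts/rpc.py for the rpc_cmd wrapper (addresses and NQNs as in this run):

  # initial target state: one malloc bdev in a two-namespace subsystem
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... the aer tool connects, registers its callbacks, and waits ...
  # adding a second namespace triggers the AEN the tool is waiting for
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2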
00:29:34.254 [ 00:29:34.254 { 00:29:34.254 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:34.254 "subtype": "Discovery", 00:29:34.254 "listen_addresses": [], 00:29:34.254 "allow_any_host": true, 00:29:34.254 "hosts": [] 00:29:34.254 }, 00:29:34.254 { 00:29:34.254 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.254 "subtype": "NVMe", 00:29:34.254 "listen_addresses": [ 00:29:34.254 { 00:29:34.254 "trtype": "TCP", 00:29:34.254 "adrfam": "IPv4", 00:29:34.254 "traddr": "10.0.0.2", 00:29:34.254 "trsvcid": "4420" 00:29:34.254 } 00:29:34.254 ], 00:29:34.254 "allow_any_host": true, 00:29:34.254 "hosts": [], 00:29:34.254 "serial_number": "SPDK00000000000001", 00:29:34.254 "model_number": "SPDK bdev Controller", 00:29:34.254 "max_namespaces": 2, 00:29:34.254 "min_cntlid": 1, 00:29:34.254 "max_cntlid": 65519, 00:29:34.254 "namespaces": [ 00:29:34.254 { 00:29:34.254 "nsid": 1, 00:29:34.254 "bdev_name": "Malloc0", 00:29:34.254 "name": "Malloc0", 00:29:34.254 "nguid": "EF3296C1B1ED47A8B0872DDCB7F06336", 00:29:34.254 "uuid": "ef3296c1-b1ed-47a8-b087-2ddcb7f06336" 00:29:34.254 }, 00:29:34.254 { 00:29:34.254 "nsid": 2, 00:29:34.254 "bdev_name": "Malloc1", 00:29:34.254 "name": "Malloc1", 00:29:34.254 "nguid": "6200A8421F8F44C194C5252870BB28D5", 00:29:34.254 "uuid": "6200a842-1f8f-44c1-94c5-252870bb28d5" 00:29:34.254 } 00:29:34.254 ] 00:29:34.254 } 00:29:34.254 ] 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2392043 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:34.254 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.255 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.255 rmmod 
nvme_tcp 00:29:34.514 rmmod nvme_fabrics 00:29:34.514 rmmod nvme_keyring 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2391821 ']' 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2391821 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2391821 ']' 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2391821 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.514 16:56:22 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391821 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391821' 00:29:34.514 killing process with pid 2391821 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2391821 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2391821 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.514 16:56:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.056 00:29:37.056 real 0m8.890s 00:29:37.056 user 0m6.823s 00:29:37.056 sys 0m4.288s 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:37.056 ************************************ 00:29:37.056 END TEST nvmf_aer 00:29:37.056 ************************************ 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.056 ************************************ 00:29:37.056 START TEST nvmf_async_init 00:29:37.056 ************************************ 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:37.056 * Looking for test storage... 00:29:37.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:37.056 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:37.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.057 --rc genhtml_branch_coverage=1 00:29:37.057 --rc genhtml_function_coverage=1 00:29:37.057 --rc genhtml_legend=1 00:29:37.057 --rc geninfo_all_blocks=1 00:29:37.057 --rc geninfo_unexecuted_blocks=1 00:29:37.057 00:29:37.057 ' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:37.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.057 --rc genhtml_branch_coverage=1 00:29:37.057 --rc genhtml_function_coverage=1 00:29:37.057 --rc genhtml_legend=1 00:29:37.057 --rc geninfo_all_blocks=1 00:29:37.057 --rc geninfo_unexecuted_blocks=1 00:29:37.057 00:29:37.057 ' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:37.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.057 --rc genhtml_branch_coverage=1 00:29:37.057 --rc genhtml_function_coverage=1 00:29:37.057 --rc genhtml_legend=1 00:29:37.057 --rc geninfo_all_blocks=1 00:29:37.057 --rc geninfo_unexecuted_blocks=1 00:29:37.057 00:29:37.057 ' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:37.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.057 --rc genhtml_branch_coverage=1 00:29:37.057 --rc genhtml_function_coverage=1 00:29:37.057 --rc genhtml_legend=1 00:29:37.057 --rc geninfo_all_blocks=1 00:29:37.057 --rc geninfo_unexecuted_blocks=1 00:29:37.057 00:29:37.057 ' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.057 16:56:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:37.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:37.057 16:56:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=96e1953b650a4369a0ababb70eccbb50 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.057 16:56:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:42.333 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.333 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:42.334 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:42.334 Found net devices under 0000:31:00.0: cvl_0_0 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:42.334 Found net devices under 0000:31:00.1: cvl_0_1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:42.334 16:56:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:42.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:42.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:29:42.334 00:29:42.334 --- 10.0.0.2 ping statistics --- 00:29:42.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.334 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:42.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:42.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:29:42.334 00:29:42.334 --- 10.0.0.1 ping statistics --- 00:29:42.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:42.334 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2396513 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2396513 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2396513 ']' 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.334 16:56:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:42.334 [2024-12-06 16:56:30.901930] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:29:42.334 [2024-12-06 16:56:30.901994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:42.334 [2024-12-06 16:56:30.993482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.334 [2024-12-06 16:56:31.020069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.334 [2024-12-06 16:56:31.020125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.334 [2024-12-06 16:56:31.020133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:42.334 [2024-12-06 16:56:31.020140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:42.334 [2024-12-06 16:56:31.020148] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.334 [2024-12-06 16:56:31.020892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 [2024-12-06 16:56:31.724869] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 null0 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 96e1953b650a4369a0ababb70eccbb50 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.270 [2024-12-06 16:56:31.765129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.270 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.528 nvme0n1 00:29:43.528 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.528 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:43.528 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.528 16:56:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.528 [ 00:29:43.528 { 00:29:43.528 "name": "nvme0n1", 00:29:43.528 "aliases": [ 00:29:43.528 "96e1953b-650a-4369-a0ab-abb70eccbb50" 00:29:43.528 ], 00:29:43.528 "product_name": "NVMe disk", 00:29:43.528 "block_size": 512, 00:29:43.528 "num_blocks": 2097152, 00:29:43.528 "uuid": "96e1953b-650a-4369-a0ab-abb70eccbb50", 00:29:43.528 "numa_id": 0, 00:29:43.528 "assigned_rate_limits": { 00:29:43.528 "rw_ios_per_sec": 0, 00:29:43.528 "rw_mbytes_per_sec": 0, 00:29:43.528 "r_mbytes_per_sec": 0, 00:29:43.528 "w_mbytes_per_sec": 0 00:29:43.528 }, 00:29:43.528 "claimed": false, 00:29:43.528 "zoned": false, 00:29:43.528 "supported_io_types": { 00:29:43.528 "read": true, 00:29:43.528 "write": true, 00:29:43.528 "unmap": false, 00:29:43.528 "flush": true, 00:29:43.528 "reset": true, 00:29:43.528 "nvme_admin": true, 00:29:43.528 "nvme_io": true, 00:29:43.528 "nvme_io_md": false, 00:29:43.528 "write_zeroes": true, 00:29:43.528 "zcopy": false, 00:29:43.528 "get_zone_info": false, 00:29:43.528 "zone_management": false, 00:29:43.528 "zone_append": false, 00:29:43.528 "compare": true, 00:29:43.528 "compare_and_write": true, 00:29:43.528 "abort": true, 00:29:43.528 "seek_hole": false, 00:29:43.528 "seek_data": false, 00:29:43.528 "copy": true, 00:29:43.528 "nvme_iov_md": false 00:29:43.528 }, 00:29:43.529 
"memory_domains": [ 00:29:43.529 { 00:29:43.529 "dma_device_id": "system", 00:29:43.529 "dma_device_type": 1 00:29:43.529 } 00:29:43.529 ], 00:29:43.529 "driver_specific": { 00:29:43.529 "nvme": [ 00:29:43.529 { 00:29:43.529 "trid": { 00:29:43.529 "trtype": "TCP", 00:29:43.529 "adrfam": "IPv4", 00:29:43.529 "traddr": "10.0.0.2", 00:29:43.529 "trsvcid": "4420", 00:29:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:43.529 }, 00:29:43.529 "ctrlr_data": { 00:29:43.529 "cntlid": 1, 00:29:43.529 "vendor_id": "0x8086", 00:29:43.529 "model_number": "SPDK bdev Controller", 00:29:43.529 "serial_number": "00000000000000000000", 00:29:43.529 "firmware_revision": "25.01", 00:29:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:43.529 "oacs": { 00:29:43.529 "security": 0, 00:29:43.529 "format": 0, 00:29:43.529 "firmware": 0, 00:29:43.529 "ns_manage": 0 00:29:43.529 }, 00:29:43.529 "multi_ctrlr": true, 00:29:43.529 "ana_reporting": false 00:29:43.529 }, 00:29:43.529 "vs": { 00:29:43.529 "nvme_version": "1.3" 00:29:43.529 }, 00:29:43.529 "ns_data": { 00:29:43.529 "id": 1, 00:29:43.529 "can_share": true 00:29:43.529 } 00:29:43.529 } 00:29:43.529 ], 00:29:43.529 "mp_policy": "active_passive" 00:29:43.529 } 00:29:43.529 } 00:29:43.529 ] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 [2024-12-06 16:56:32.014251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:43.529 [2024-12-06 16:56:32.014316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02040 (9): Bad file descriptor 00:29:43.529 [2024-12-06 16:56:32.146207] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 [ 00:29:43.529 { 00:29:43.529 "name": "nvme0n1", 00:29:43.529 "aliases": [ 00:29:43.529 "96e1953b-650a-4369-a0ab-abb70eccbb50" 00:29:43.529 ], 00:29:43.529 "product_name": "NVMe disk", 00:29:43.529 "block_size": 512, 00:29:43.529 "num_blocks": 2097152, 00:29:43.529 "uuid": "96e1953b-650a-4369-a0ab-abb70eccbb50", 00:29:43.529 "numa_id": 0, 00:29:43.529 "assigned_rate_limits": { 00:29:43.529 "rw_ios_per_sec": 0, 00:29:43.529 "rw_mbytes_per_sec": 0, 00:29:43.529 "r_mbytes_per_sec": 0, 00:29:43.529 "w_mbytes_per_sec": 0 00:29:43.529 }, 00:29:43.529 "claimed": false, 00:29:43.529 "zoned": false, 00:29:43.529 "supported_io_types": { 00:29:43.529 "read": true, 00:29:43.529 "write": true, 00:29:43.529 "unmap": false, 00:29:43.529 "flush": true, 00:29:43.529 "reset": true, 00:29:43.529 "nvme_admin": true, 00:29:43.529 "nvme_io": true, 00:29:43.529 "nvme_io_md": false, 00:29:43.529 "write_zeroes": true, 00:29:43.529 "zcopy": false, 00:29:43.529 "get_zone_info": false, 00:29:43.529 "zone_management": false, 00:29:43.529 "zone_append": false, 00:29:43.529 "compare": true, 00:29:43.529 "compare_and_write": true, 00:29:43.529 "abort": true, 00:29:43.529 "seek_hole": false, 00:29:43.529 "seek_data": false, 00:29:43.529 "copy": true, 00:29:43.529 "nvme_iov_md": false 00:29:43.529 }, 00:29:43.529 "memory_domains": [ 00:29:43.529 { 00:29:43.529 "dma_device_id": "system", 00:29:43.529 "dma_device_type": 1 00:29:43.529 } 00:29:43.529 ], 00:29:43.529 "driver_specific": { 00:29:43.529 "nvme": [ 00:29:43.529 { 00:29:43.529 "trid": { 00:29:43.529 "trtype": "TCP", 00:29:43.529 "adrfam": "IPv4", 00:29:43.529 "traddr": "10.0.0.2", 00:29:43.529 "trsvcid": "4420", 00:29:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:43.529 }, 00:29:43.529 "ctrlr_data": { 00:29:43.529 "cntlid": 2, 00:29:43.529 "vendor_id": "0x8086", 00:29:43.529 "model_number": "SPDK bdev Controller", 00:29:43.529 "serial_number": "00000000000000000000", 00:29:43.529 "firmware_revision": "25.01", 00:29:43.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:43.529 "oacs": { 00:29:43.529 "security": 0, 00:29:43.529 "format": 0, 00:29:43.529 "firmware": 0, 00:29:43.529 "ns_manage": 0 00:29:43.529 }, 00:29:43.529 "multi_ctrlr": true, 00:29:43.529 "ana_reporting": false 00:29:43.529 }, 00:29:43.529 "vs": { 00:29:43.529 "nvme_version": "1.3" 00:29:43.529 }, 00:29:43.529 "ns_data": { 00:29:43.529 "id": 1, 00:29:43.529 "can_share": true 00:29:43.529 } 00:29:43.529 } 00:29:43.529 ], 00:29:43.529 "mp_policy": "active_passive" 00:29:43.529 } 00:29:43.529 } 00:29:43.529 ] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
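The trace now moves to the TLS variant of the same attach. A sketch of the flow it exercises, again assuming the stock scripts/rpc.py client; the PSK interchange string, NQNs, port, and key name are copied from the trace, while $key_path stands in for whatever mktemp returned:

    # Write the NVMe/TCP PSK interchange secret to a file, restrict it to
    # mode 0600 as the test does, then register it in SPDK's keyring as key0.
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py keyring_file_add_key key0 "$key_path"
    # Lock the subsystem down to named hosts, open a TLS-only listener on
    # 4421, and bind the PSK to one host NQN.
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # Host side: the attach must present the matching host NQN and key for
    # the handshake to succeed; note both ends log that TLS support is
    # still considered experimental.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk key0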
00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4DmasXmlTh 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4DmasXmlTh 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.4DmasXmlTh 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 [2024-12-06 16:56:32.202835] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:43.529 [2024-12-06 16:56:32.202953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.529 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.529 [2024-12-06 16:56:32.218912] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:43.788 nvme0n1 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.789 [ 00:29:43.789 { 00:29:43.789 "name": "nvme0n1", 00:29:43.789 "aliases": [ 00:29:43.789 "96e1953b-650a-4369-a0ab-abb70eccbb50" 00:29:43.789 ], 00:29:43.789 "product_name": "NVMe disk", 00:29:43.789 "block_size": 512, 00:29:43.789 "num_blocks": 2097152, 00:29:43.789 "uuid": "96e1953b-650a-4369-a0ab-abb70eccbb50", 00:29:43.789 "numa_id": 0, 00:29:43.789 "assigned_rate_limits": { 00:29:43.789 "rw_ios_per_sec": 0, 00:29:43.789 "rw_mbytes_per_sec": 0, 00:29:43.789 "r_mbytes_per_sec": 0, 00:29:43.789 "w_mbytes_per_sec": 0 00:29:43.789 }, 00:29:43.789 "claimed": false, 00:29:43.789 "zoned": false, 00:29:43.789 "supported_io_types": { 00:29:43.789 "read": true, 00:29:43.789 "write": true, 00:29:43.789 "unmap": false, 00:29:43.789 "flush": true, 00:29:43.789 "reset": true, 00:29:43.789 "nvme_admin": true, 00:29:43.789 "nvme_io": true, 00:29:43.789 "nvme_io_md": false, 00:29:43.789 "write_zeroes": true, 00:29:43.789 "zcopy": false, 00:29:43.789 "get_zone_info": false, 00:29:43.789 "zone_management": false, 00:29:43.789 "zone_append": false, 00:29:43.789 "compare": true, 00:29:43.789 "compare_and_write": true, 00:29:43.789 "abort": true, 00:29:43.789 "seek_hole": false, 00:29:43.789 "seek_data": false, 00:29:43.789 "copy": true, 00:29:43.789 "nvme_iov_md": false 00:29:43.789 }, 00:29:43.789 "memory_domains": [ 00:29:43.789 { 00:29:43.789 "dma_device_id": "system", 00:29:43.789 "dma_device_type": 1 00:29:43.789 } 00:29:43.789 ], 00:29:43.789 "driver_specific": { 00:29:43.789 "nvme": [ 00:29:43.789 { 00:29:43.789 "trid": { 00:29:43.789 "trtype": "TCP", 00:29:43.789 "adrfam": "IPv4", 00:29:43.789 "traddr": "10.0.0.2", 00:29:43.789 "trsvcid": "4421", 00:29:43.789 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:43.789 }, 00:29:43.789 "ctrlr_data": { 00:29:43.789 "cntlid": 3, 00:29:43.789 "vendor_id": "0x8086", 00:29:43.789 "model_number": "SPDK bdev Controller", 00:29:43.789 "serial_number": "00000000000000000000", 00:29:43.789 "firmware_revision": "25.01", 00:29:43.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:43.789 "oacs": { 00:29:43.789 "security": 0, 00:29:43.789 "format": 0, 00:29:43.789 "firmware": 0, 00:29:43.789 "ns_manage": 0 00:29:43.789 }, 00:29:43.789 "multi_ctrlr": true, 00:29:43.789 "ana_reporting": false 00:29:43.789 }, 00:29:43.789 "vs": { 00:29:43.789 "nvme_version": "1.3" 00:29:43.789 }, 00:29:43.789 "ns_data": { 00:29:43.789 "id": 1, 00:29:43.789 "can_share": true 00:29:43.789 } 00:29:43.789 } 00:29:43.789 ], 00:29:43.789 "mp_policy": "active_passive" 00:29:43.789 } 00:29:43.789 } 00:29:43.789 ] 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.4DmasXmlTh 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
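nvmftestfini, which runs next, unwinds the environment nvmftestinit built. A rough by-hand equivalent using this run's names; the namespace removal itself (_remove_spdk_ns) is silenced in the trace, so that step is an assumption:

    # Unload the kernel initiator modules pulled in earlier by modprobe.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the nvmf_tgt app (pid 2396513 in this run), then drop the
    # SPDK-tagged iptables ACCEPT rule by round-tripping the ruleset
    # through grep -v SPDK_NVMF, as iptr does above.
    kill 2396513
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Assumed detail of _remove_spdk_ns: delete the target namespace, then
    # flush the initiator-side address as the trace does last.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1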
00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.789 rmmod nvme_tcp 00:29:43.789 rmmod nvme_fabrics 00:29:43.789 rmmod nvme_keyring 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2396513 ']' 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2396513 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2396513 ']' 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2396513 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396513 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396513' 00:29:43.789 killing process with pid 2396513 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2396513 00:29:43.789 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2396513 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:44.048 16:56:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.953 00:29:45.953 real 0m9.331s 00:29:45.953 user 0m3.278s 00:29:45.953 sys 0m4.414s 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:45.953 ************************************ 00:29:45.953 END TEST nvmf_async_init 00:29:45.953 ************************************ 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.953 ************************************ 00:29:45.953 START TEST dma 00:29:45.953 ************************************ 00:29:45.953 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:46.214 * Looking for test storage... 00:29:46.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:46.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.214 --rc genhtml_branch_coverage=1 00:29:46.214 --rc genhtml_function_coverage=1 00:29:46.214 --rc genhtml_legend=1 00:29:46.214 --rc geninfo_all_blocks=1 00:29:46.214 --rc geninfo_unexecuted_blocks=1 00:29:46.214 00:29:46.214 ' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:46.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.214 --rc genhtml_branch_coverage=1 00:29:46.214 --rc genhtml_function_coverage=1 00:29:46.214 --rc genhtml_legend=1 00:29:46.214 --rc geninfo_all_blocks=1 00:29:46.214 --rc geninfo_unexecuted_blocks=1 00:29:46.214 00:29:46.214 ' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:46.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.214 --rc genhtml_branch_coverage=1 00:29:46.214 --rc genhtml_function_coverage=1 00:29:46.214 --rc genhtml_legend=1 00:29:46.214 --rc geninfo_all_blocks=1 00:29:46.214 --rc geninfo_unexecuted_blocks=1 00:29:46.214 00:29:46.214 ' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:46.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.214 --rc genhtml_branch_coverage=1 00:29:46.214 --rc genhtml_function_coverage=1 00:29:46.214 --rc genhtml_legend=1 00:29:46.214 --rc geninfo_all_blocks=1 00:29:46.214 --rc geninfo_unexecuted_blocks=1 00:29:46.214 00:29:46.214 ' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.214 
16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:46.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.214 16:56:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:46.215 00:29:46.215 real 0m0.138s 00:29:46.215 user 0m0.083s 00:29:46.215 sys 0m0.061s 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:46.215 ************************************ 00:29:46.215 END TEST dma 00:29:46.215 ************************************ 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:46.215 ************************************ 00:29:46.215 START TEST nvmf_identify 00:29:46.215 
************************************ 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:46.215 * Looking for test storage... 00:29:46.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:46.215 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:46.475 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:46.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.476 --rc genhtml_branch_coverage=1 00:29:46.476 --rc genhtml_function_coverage=1 00:29:46.476 --rc genhtml_legend=1 00:29:46.476 --rc geninfo_all_blocks=1 00:29:46.476 --rc geninfo_unexecuted_blocks=1 00:29:46.476 00:29:46.476 ' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:46.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.476 --rc genhtml_branch_coverage=1 00:29:46.476 --rc genhtml_function_coverage=1 00:29:46.476 --rc genhtml_legend=1 00:29:46.476 --rc geninfo_all_blocks=1 00:29:46.476 --rc geninfo_unexecuted_blocks=1 00:29:46.476 00:29:46.476 ' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:46.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.476 --rc genhtml_branch_coverage=1 00:29:46.476 --rc genhtml_function_coverage=1 00:29:46.476 --rc genhtml_legend=1 00:29:46.476 --rc geninfo_all_blocks=1 00:29:46.476 --rc geninfo_unexecuted_blocks=1 00:29:46.476 00:29:46.476 ' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:46.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.476 --rc genhtml_branch_coverage=1 00:29:46.476 --rc genhtml_function_coverage=1 00:29:46.476 --rc genhtml_legend=1 00:29:46.476 --rc geninfo_all_blocks=1 00:29:46.476 --rc geninfo_unexecuted_blocks=1 00:29:46.476 00:29:46.476 ' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:46.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.476 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.477 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.477 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.477 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.477 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.477 16:56:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:51.751 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:51.752 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:51.752 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
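The scan above classifies NICs by PCI vendor/device ID (intel=0x8086, mellanox=0x15b3): 0x1592/0x159b go into the e810 array, 0x37d2 into x722, and the 0x10xx/0xa2xx IDs into mlx. Both ports in this run report 0x8086:0x159b bound to the ice driver, so the e810 branch is taken and each PCI function is resolved to its kernel net device through sysfs, as the next entries show. The earlier "[: : integer expression expected" messages from common.sh line 33 are non-fatal: the script tests an unset variable with -eq and the run continues. A minimal sketch of the sysfs lookup the scan performs, assuming the PCI address from this run:

  # Resolve a PCI function to its net device(s), the same path common.sh globs above
  pci=0000:31:00.0
  for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      echo "${dev##*/}"   # prints cvl_0_0 on this machine
  done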
00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:51.752 Found net devices under 0000:31:00.0: cvl_0_0 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:51.752 Found net devices under 0000:31:00.1: cvl_0_1 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:51.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:51.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:29:51.752 00:29:51.752 --- 10.0.0.2 ping statistics --- 00:29:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.752 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:51.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:29:51.752 00:29:51.752 --- 10.0.0.1 ping statistics --- 00:29:51.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.752 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2401255 00:29:51.752 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2401255 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2401255 ']' 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.753 16:56:40 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:51.753 [2024-12-06 16:56:40.428651] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
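The TCP init above builds a two-namespace topology so initiator and target traffic crosses a real link: cvl_0_0 is moved into namespace cvl_0_0_ns_spdk as the target port (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1/24), an iptables rule admits TCP port 4420, and a ping in each direction verifies reachability before nvme-tcp is loaded. Condensed from the trace (address flushes omitted), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

nvmf_tgt is then launched inside the namespace with -m 0xF, a core mask selecting cores 0-3 (the entries that follow show four reactors starting), and -e 0xFFFF, which enables every tracepoint group so spdk_trace can capture a snapshot later.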
00:29:51.753 [2024-12-06 16:56:40.428699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.012 [2024-12-06 16:56:40.512448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.012 [2024-12-06 16:56:40.531941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.012 [2024-12-06 16:56:40.531975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.012 [2024-12-06 16:56:40.531985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.012 [2024-12-06 16:56:40.531991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.012 [2024-12-06 16:56:40.532000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.012 [2024-12-06 16:56:40.533519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.012 [2024-12-06 16:56:40.533641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.012 [2024-12-06 16:56:40.533792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.012 [2024-12-06 16:56:40.533793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.582 [2024-12-06 16:56:41.210364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.582 Malloc0 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.582 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.844 [2024-12-06 16:56:41.289867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:52.844 [ 00:29:52.844 { 00:29:52.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:52.844 "subtype": "Discovery", 00:29:52.844 "listen_addresses": [ 00:29:52.844 { 00:29:52.844 "trtype": "TCP", 00:29:52.844 "adrfam": "IPv4", 00:29:52.844 "traddr": "10.0.0.2", 00:29:52.844 "trsvcid": "4420" 00:29:52.844 } 00:29:52.844 ], 00:29:52.844 "allow_any_host": true, 00:29:52.844 "hosts": [] 00:29:52.844 }, 00:29:52.844 { 00:29:52.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.844 "subtype": "NVMe", 00:29:52.844 "listen_addresses": [ 00:29:52.844 { 00:29:52.844 "trtype": "TCP", 00:29:52.844 "adrfam": "IPv4", 00:29:52.844 "traddr": "10.0.0.2", 00:29:52.844 "trsvcid": "4420" 00:29:52.844 } 00:29:52.844 ], 00:29:52.844 "allow_any_host": true, 00:29:52.844 "hosts": [], 00:29:52.844 "serial_number": "SPDK00000000000001", 00:29:52.844 "model_number": "SPDK bdev Controller", 00:29:52.844 "max_namespaces": 32, 00:29:52.844 "min_cntlid": 1, 00:29:52.844 "max_cntlid": 65519, 00:29:52.844 "namespaces": [ 00:29:52.844 { 00:29:52.844 "nsid": 1, 00:29:52.844 "bdev_name": "Malloc0", 00:29:52.844 "name": "Malloc0", 00:29:52.844 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:52.844 "eui64": "ABCDEF0123456789", 00:29:52.844 "uuid": "5351d374-6845-4b17-8978-131e650d14f8" 00:29:52.844 } 00:29:52.844 ] 00:29:52.844 } 00:29:52.844 ] 00:29:52.844 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.845 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:52.845 [2024-12-06 16:56:41.326462] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:29:52.845 [2024-12-06 16:56:41.326492] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401340 ] 00:29:52.845 [2024-12-06 16:56:41.375878] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:52.845 [2024-12-06 16:56:41.375935] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:52.845 [2024-12-06 16:56:41.375941] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:52.845 [2024-12-06 16:56:41.375956] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:52.845 [2024-12-06 16:56:41.375966] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:52.845 [2024-12-06 16:56:41.379379] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:52.845 [2024-12-06 16:56:41.379415] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1063de0 0 00:29:52.845 [2024-12-06 16:56:41.387116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:52.845 [2024-12-06 16:56:41.387128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:52.845 [2024-12-06 16:56:41.387133] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:52.845 [2024-12-06 16:56:41.387136] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:52.845 [2024-12-06 16:56:41.387166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.387172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.387177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.845 [2024-12-06 16:56:41.387190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:52.845 [2024-12-06 16:56:41.387208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.845 [2024-12-06 16:56:41.395110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.845 [2024-12-06 16:56:41.395120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.845 [2024-12-06 16:56:41.395124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.845 [2024-12-06 16:56:41.395139] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:52.845 [2024-12-06 16:56:41.395147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:52.845 [2024-12-06 16:56:41.395152] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:52.845 [2024-12-06 16:56:41.395170] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.845 [2024-12-06 16:56:41.395186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.845 [2024-12-06 16:56:41.395200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.845 [2024-12-06 16:56:41.395414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.845 [2024-12-06 16:56:41.395420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.845 [2024-12-06 16:56:41.395424] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.845 [2024-12-06 16:56:41.395433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:52.845 [2024-12-06 16:56:41.395441] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:52.845 [2024-12-06 16:56:41.395447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.845 [2024-12-06 16:56:41.395462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.845 [2024-12-06 16:56:41.395472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.845 [2024-12-06 16:56:41.395634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.845 [2024-12-06 16:56:41.395640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.845 [2024-12-06 16:56:41.395644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.845 [2024-12-06 16:56:41.395654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:52.845 [2024-12-06 16:56:41.395662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:52.845 [2024-12-06 16:56:41.395669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.845 [2024-12-06 16:56:41.395683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.845 [2024-12-06 16:56:41.395693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 
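What this stretch of the trace records is SPDK's fabrics controller bring-up over the admin queue: after the ICReq/ICResp exchange and FABRIC CONNECT (which returned CNTLID 0x0001 above), the host issues PROPERTY GET for VS and CAP, checks CC.EN, disables the controller and waits for CSTS.RDY = 0, re-enables with CC.EN = 1, waits for CSTS.RDY = 1, and only then sends IDENTIFY. The same discovery exchange can be driven from the initiator side with stock nvme-cli; shown for comparison only, this command is not part of the run:

  # Query the discovery subsystem the target exposed on 10.0.0.2:4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420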
00:29:52.845 [2024-12-06 16:56:41.395871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.845 [2024-12-06 16:56:41.395877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.845 [2024-12-06 16:56:41.395881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.845 [2024-12-06 16:56:41.395890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:52.845 [2024-12-06 16:56:41.395899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.395909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.845 [2024-12-06 16:56:41.395916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.845 [2024-12-06 16:56:41.395926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.845 [2024-12-06 16:56:41.396091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.845 [2024-12-06 16:56:41.396098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.845 [2024-12-06 16:56:41.396106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.396110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.845 [2024-12-06 16:56:41.396115] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:52.845 [2024-12-06 16:56:41.396120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:52.845 [2024-12-06 16:56:41.396128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:52.845 [2024-12-06 16:56:41.396238] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:52.845 [2024-12-06 16:56:41.396244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:52.845 [2024-12-06 16:56:41.396252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.396256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.396260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.845 [2024-12-06 16:56:41.396266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.845 [2024-12-06 16:56:41.396277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.845 [2024-12-06 16:56:41.396462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.845 [2024-12-06 16:56:41.396469] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.845 [2024-12-06 16:56:41.396472] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.396476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.845 [2024-12-06 16:56:41.396481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:52.845 [2024-12-06 16:56:41.396490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.396494] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.396498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.845 [2024-12-06 16:56:41.396505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.845 [2024-12-06 16:56:41.396515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.845 [2024-12-06 16:56:41.396680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.845 [2024-12-06 16:56:41.396686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.845 [2024-12-06 16:56:41.396690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.845 [2024-12-06 16:56:41.396694] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.845 [2024-12-06 16:56:41.396698] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:52.845 [2024-12-06 16:56:41.396705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:52.845 [2024-12-06 16:56:41.396714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:52.845 [2024-12-06 16:56:41.396721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:52.845 [2024-12-06 16:56:41.396730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.396734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.396741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.846 [2024-12-06 16:56:41.396751] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.846 [2024-12-06 16:56:41.396955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.846 [2024-12-06 16:56:41.396962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.846 [2024-12-06 16:56:41.396966] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.396970] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1063de0): datao=0, datal=4096, cccid=0 00:29:52.846 [2024-12-06 16:56:41.396975] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x10bef40) on tqpair(0x1063de0): expected_datao=0, payload_size=4096 00:29:52.846 [2024-12-06 16:56:41.396980] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.396987] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.396992] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397190] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.846 [2024-12-06 16:56:41.397196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.846 [2024-12-06 16:56:41.397200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.846 [2024-12-06 16:56:41.397212] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:52.846 [2024-12-06 16:56:41.397219] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:52.846 [2024-12-06 16:56:41.397224] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:52.846 [2024-12-06 16:56:41.397229] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:52.846 [2024-12-06 16:56:41.397234] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:52.846 [2024-12-06 16:56:41.397239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:52.846 [2024-12-06 16:56:41.397247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:52.846 [2024-12-06 16:56:41.397254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.397268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:52.846 [2024-12-06 16:56:41.397279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.846 [2024-12-06 16:56:41.397478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.846 [2024-12-06 16:56:41.397486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.846 [2024-12-06 16:56:41.397490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.846 [2024-12-06 16:56:41.397502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1063de0) 00:29:52.846 
[2024-12-06 16:56:41.397515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.846 [2024-12-06 16:56:41.397522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.397535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.846 [2024-12-06 16:56:41.397541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.397554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.846 [2024-12-06 16:56:41.397561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.397574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.846 [2024-12-06 16:56:41.397579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:52.846 [2024-12-06 16:56:41.397589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:52.846 [2024-12-06 16:56:41.397595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.397606] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.846 [2024-12-06 16:56:41.397618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bef40, cid 0, qid 0 00:29:52.846 [2024-12-06 16:56:41.397623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf0c0, cid 1, qid 0 00:29:52.846 [2024-12-06 16:56:41.397628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf240, cid 2, qid 0 00:29:52.846 [2024-12-06 16:56:41.397633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.846 [2024-12-06 16:56:41.397637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf540, cid 4, qid 0 00:29:52.846 [2024-12-06 16:56:41.397846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.846 [2024-12-06 16:56:41.397853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.846 [2024-12-06 16:56:41.397856] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:52.846 [2024-12-06 16:56:41.397860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf540) on tqpair=0x1063de0 00:29:52.846 [2024-12-06 16:56:41.397866] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:52.846 [2024-12-06 16:56:41.397873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:52.846 [2024-12-06 16:56:41.397883] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.397887] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.397894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.846 [2024-12-06 16:56:41.397903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf540, cid 4, qid 0 00:29:52.846 [2024-12-06 16:56:41.398122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.846 [2024-12-06 16:56:41.398130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.846 [2024-12-06 16:56:41.398133] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.398137] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1063de0): datao=0, datal=4096, cccid=4 00:29:52.846 [2024-12-06 16:56:41.398142] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10bf540) on tqpair(0x1063de0): expected_datao=0, payload_size=4096 00:29:52.846 [2024-12-06 16:56:41.398146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.398159] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.398163] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.846 [2024-12-06 16:56:41.443117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.846 [2024-12-06 16:56:41.443121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf540) on tqpair=0x1063de0 00:29:52.846 [2024-12-06 16:56:41.443137] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:52.846 [2024-12-06 16:56:41.443160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.443171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.846 [2024-12-06 16:56:41.443178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1063de0) 00:29:52.846 [2024-12-06 16:56:41.443192] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:52.846 [2024-12-06 16:56:41.443207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf540, cid 4, qid 0 00:29:52.846 [2024-12-06 16:56:41.443213] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf6c0, cid 5, qid 0 00:29:52.846 [2024-12-06 16:56:41.443428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.846 [2024-12-06 16:56:41.443435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.846 [2024-12-06 16:56:41.443438] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443442] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1063de0): datao=0, datal=1024, cccid=4 00:29:52.846 [2024-12-06 16:56:41.443447] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10bf540) on tqpair(0x1063de0): expected_datao=0, payload_size=1024 00:29:52.846 [2024-12-06 16:56:41.443451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443458] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443464] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.846 [2024-12-06 16:56:41.443470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.846 [2024-12-06 16:56:41.443476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.846 [2024-12-06 16:56:41.443480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.443484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf6c0) on tqpair=0x1063de0 00:29:52.847 [2024-12-06 16:56:41.485282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.847 [2024-12-06 16:56:41.485295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.847 [2024-12-06 16:56:41.485298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf540) on tqpair=0x1063de0 00:29:52.847 [2024-12-06 16:56:41.485316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1063de0) 00:29:52.847 [2024-12-06 16:56:41.485328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.847 [2024-12-06 16:56:41.485344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf540, cid 4, qid 0 00:29:52.847 [2024-12-06 16:56:41.485557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.847 [2024-12-06 16:56:41.485564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.847 [2024-12-06 16:56:41.485567] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485571] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1063de0): datao=0, datal=3072, cccid=4 00:29:52.847 [2024-12-06 16:56:41.485576] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10bf540) on tqpair(0x1063de0): expected_datao=0, payload_size=3072 00:29:52.847 [2024-12-06 16:56:41.485581] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485597] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485602] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.847 [2024-12-06 16:56:41.485744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.847 [2024-12-06 16:56:41.485748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf540) on tqpair=0x1063de0 00:29:52.847 [2024-12-06 16:56:41.485760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.485763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1063de0) 00:29:52.847 [2024-12-06 16:56:41.485770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.847 [2024-12-06 16:56:41.485784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf540, cid 4, qid 0 00:29:52.847 [2024-12-06 16:56:41.486006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:52.847 [2024-12-06 16:56:41.486012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:52.847 [2024-12-06 16:56:41.486016] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.486020] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1063de0): datao=0, datal=8, cccid=4 00:29:52.847 [2024-12-06 16:56:41.486024] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10bf540) on tqpair(0x1063de0): expected_datao=0, payload_size=8 00:29:52.847 [2024-12-06 16:56:41.486029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.486035] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.486039] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.531111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.847 [2024-12-06 16:56:41.531120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.847 [2024-12-06 16:56:41.531124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.847 [2024-12-06 16:56:41.531128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf540) on tqpair=0x1063de0
00:29:52.847 =====================================================
00:29:52.847 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:29:52.847 =====================================================
00:29:52.847 Controller Capabilities/Features
00:29:52.847 ================================
00:29:52.847 Vendor ID: 0000
00:29:52.847 Subsystem Vendor ID: 0000
00:29:52.847 Serial Number: ....................
00:29:52.847 Model Number: ........................................
00:29:52.847 Firmware Version: 25.01
00:29:52.847 Recommended Arb Burst: 0
00:29:52.847 IEEE OUI Identifier: 00 00 00
00:29:52.847 Multi-path I/O
00:29:52.847 May have multiple subsystem ports: No
00:29:52.847 May have multiple controllers: No
00:29:52.847 Associated with SR-IOV VF: No
00:29:52.847 Max Data Transfer Size: 131072
00:29:52.847 Max Number of Namespaces: 0
00:29:52.847 Max Number of I/O Queues: 1024
00:29:52.847 NVMe Specification Version (VS): 1.3
00:29:52.847 NVMe Specification Version (Identify): 1.3
00:29:52.847 Maximum Queue Entries: 128
00:29:52.847 Contiguous Queues Required: Yes
00:29:52.847 Arbitration Mechanisms Supported
00:29:52.847 Weighted Round Robin: Not Supported
00:29:52.847 Vendor Specific: Not Supported
00:29:52.847 Reset Timeout: 15000 ms
00:29:52.847 Doorbell Stride: 4 bytes
00:29:52.847 NVM Subsystem Reset: Not Supported
00:29:52.847 Command Sets Supported
00:29:52.847 NVM Command Set: Supported
00:29:52.847 Boot Partition: Not Supported
00:29:52.847 Memory Page Size Minimum: 4096 bytes
00:29:52.847 Memory Page Size Maximum: 4096 bytes
00:29:52.847 Persistent Memory Region: Not Supported
00:29:52.847 Optional Asynchronous Events Supported
00:29:52.847 Namespace Attribute Notices: Not Supported
00:29:52.847 Firmware Activation Notices: Not Supported
00:29:52.847 ANA Change Notices: Not Supported
00:29:52.847 PLE Aggregate Log Change Notices: Not Supported
00:29:52.847 LBA Status Info Alert Notices: Not Supported
00:29:52.847 EGE Aggregate Log Change Notices: Not Supported
00:29:52.847 Normal NVM Subsystem Shutdown event: Not Supported
00:29:52.847 Zone Descriptor Change Notices: Not Supported
00:29:52.847 Discovery Log Change Notices: Supported
00:29:52.847 Controller Attributes
00:29:52.847 128-bit Host Identifier: Not Supported
00:29:52.847 Non-Operational Permissive Mode: Not Supported
00:29:52.847 NVM Sets: Not Supported
00:29:52.847 Read Recovery Levels: Not Supported
00:29:52.847 Endurance Groups: Not Supported
00:29:52.847 Predictable Latency Mode: Not Supported
00:29:52.847 Traffic Based Keep ALive: Not Supported
00:29:52.847 Namespace Granularity: Not Supported
00:29:52.847 SQ Associations: Not Supported
00:29:52.847 UUID List: Not Supported
00:29:52.847 Multi-Domain Subsystem: Not Supported
00:29:52.847 Fixed Capacity Management: Not Supported
00:29:52.847 Variable Capacity Management: Not Supported
00:29:52.847 Delete Endurance Group: Not Supported
00:29:52.847 Delete NVM Set: Not Supported
00:29:52.847 Extended LBA Formats Supported: Not Supported
00:29:52.847 Flexible Data Placement Supported: Not Supported
00:29:52.847
00:29:52.847 Controller Memory Buffer Support
00:29:52.847 ================================
00:29:52.847 Supported: No
00:29:52.847
00:29:52.847 Persistent Memory Region Support
00:29:52.847 ================================
00:29:52.847 Supported: No
00:29:52.847
00:29:52.847 Admin Command Set Attributes
00:29:52.847 ============================
00:29:52.847 Security Send/Receive: Not Supported
00:29:52.847 Format NVM: Not Supported
00:29:52.847 Firmware Activate/Download: Not Supported
00:29:52.847 Namespace Management: Not Supported
00:29:52.847 Device Self-Test: Not Supported
00:29:52.847 Directives: Not Supported
00:29:52.847 NVMe-MI: Not Supported
00:29:52.847 Virtualization Management: Not Supported
00:29:52.847 Doorbell Buffer Config: Not Supported
00:29:52.847 Get LBA Status Capability: Not Supported
00:29:52.847 Command & Feature Lockdown Capability: Not Supported
00:29:52.847 Abort Command Limit: 1
00:29:52.847 Async Event Request Limit: 4
00:29:52.847 Number of Firmware Slots: N/A
00:29:52.847 Firmware Slot 1 Read-Only: N/A
00:29:52.847 Firmware Activation Without Reset: N/A
00:29:52.847 Multiple Update Detection Support: N/A
00:29:52.847 Firmware Update Granularity: No Information Provided
00:29:52.847 Per-Namespace SMART Log: No
00:29:52.847 Asymmetric Namespace Access Log Page: Not Supported
00:29:52.847 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:52.847 Command Effects Log Page: Not Supported
00:29:52.847 Get Log Page Extended Data: Supported
00:29:52.847 Telemetry Log Pages: Not Supported
00:29:52.847 Persistent Event Log Pages: Not Supported
00:29:52.847 Supported Log Pages Log Page: May Support
00:29:52.847 Commands Supported & Effects Log Page: Not Supported
00:29:52.847 Feature Identifiers & Effects Log Page:May Support
00:29:52.847 NVMe-MI Commands & Effects Log Page: May Support
00:29:52.847 Data Area 4 for Telemetry Log: Not Supported
00:29:52.847 Error Log Page Entries Supported: 128
00:29:52.847 Keep Alive: Not Supported
00:29:52.847
00:29:52.847 NVM Command Set Attributes
00:29:52.847 ==========================
00:29:52.847 Submission Queue Entry Size
00:29:52.847 Max: 1
00:29:52.847 Min: 1
00:29:52.847 Completion Queue Entry Size
00:29:52.847 Max: 1
00:29:52.847 Min: 1
00:29:52.847 Number of Namespaces: 0
00:29:52.847 Compare Command: Not Supported
00:29:52.847 Write Uncorrectable Command: Not Supported
00:29:52.847 Dataset Management Command: Not Supported
00:29:52.847 Write Zeroes Command: Not Supported
00:29:52.847 Set Features Save Field: Not Supported
00:29:52.847 Reservations: Not Supported
00:29:52.847 Timestamp: Not Supported
00:29:52.847 Copy: Not Supported
00:29:52.848 Volatile Write Cache: Not Present
00:29:52.848 Atomic Write Unit (Normal): 1
00:29:52.848 Atomic Write Unit (PFail): 1
00:29:52.848 Atomic Compare & Write Unit: 1
00:29:52.848 Fused Compare & Write: Supported
00:29:52.848 Scatter-Gather List
00:29:52.848 SGL Command Set: Supported
00:29:52.848 SGL Keyed: Supported
00:29:52.848 SGL Bit Bucket Descriptor: Not Supported
00:29:52.848 SGL Metadata Pointer: Not Supported
00:29:52.848 Oversized SGL: Not Supported
00:29:52.848 SGL Metadata Address: Not Supported
00:29:52.848 SGL Offset: Supported
00:29:52.848 Transport SGL Data Block: Not Supported
00:29:52.848 Replay Protected Memory Block: Not Supported
00:29:52.848
00:29:52.848 Firmware Slot Information
00:29:52.848 =========================
00:29:52.848 Active slot: 0
00:29:52.848
00:29:52.848
00:29:52.848 Error Log
00:29:52.848 =========
00:29:52.848
00:29:52.848 Active Namespaces
00:29:52.848 =================
00:29:52.848 Discovery Log Page
00:29:52.848 ==================
00:29:52.848 Generation Counter: 2
00:29:52.848 Number of Records: 2
00:29:52.848 Record Format: 0
00:29:52.848
00:29:52.848 Discovery Log Entry 0
00:29:52.848 ----------------------
00:29:52.848 Transport Type: 3 (TCP)
00:29:52.848 Address Family: 1 (IPv4)
00:29:52.848 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:52.848 Entry Flags:
00:29:52.848 Duplicate Returned Information: 1
00:29:52.848 Explicit Persistent Connection Support for Discovery: 1
00:29:52.848 Transport Requirements:
00:29:52.848 Secure Channel: Not Required
00:29:52.848 Port ID: 0 (0x0000)
00:29:52.848 Controller ID: 65535 (0xffff)
00:29:52.848 Admin Max SQ Size: 128
00:29:52.848 Transport Service Identifier: 4420
00:29:52.848 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:52.848 Transport Address: 10.0.0.2
00:29:52.848 Discovery Log Entry 1
00:29:52.848 ----------------------
00:29:52.848 Transport Type: 3 (TCP)
00:29:52.848 Address Family: 1 (IPv4)
00:29:52.848 Subsystem Type: 2 (NVM Subsystem)
00:29:52.848 Entry Flags:
00:29:52.848 Duplicate Returned Information: 0
00:29:52.848 Explicit Persistent Connection Support for Discovery: 0
00:29:52.848 Transport Requirements:
00:29:52.848 Secure Channel: Not Required
00:29:52.848 Port ID: 0 (0x0000)
00:29:52.848 Controller ID: 65535 (0xffff)
00:29:52.848 Admin Max SQ Size: 128
00:29:52.848 Transport Service Identifier: 4420
00:29:52.848 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:29:52.848 Transport Address: 10.0.0.2
[2024-12-06 16:56:41.531219] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:52.848 [2024-12-06 16:56:41.531230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bef40) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.531238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.848 [2024-12-06 16:56:41.531243] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf0c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.531248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.848 [2024-12-06 16:56:41.531253] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf240) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.531258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.848 [2024-12-06 16:56:41.531263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.531268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:52.848 [2024-12-06 16:56:41.531278] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.848 [2024-12-06 16:56:41.531293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.848 [2024-12-06 16:56:41.531308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.848 [2024-12-06 16:56:41.531543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.848 [2024-12-06 16:56:41.531549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.848 [2024-12-06 16:56:41.531553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.531564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.848 [2024-12-06
16:56:41.531578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.848 [2024-12-06 16:56:41.531592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.848 [2024-12-06 16:56:41.531776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.848 [2024-12-06 16:56:41.531782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.848 [2024-12-06 16:56:41.531786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.531795] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:52.848 [2024-12-06 16:56:41.531799] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:52.848 [2024-12-06 16:56:41.531809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.531819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.848 [2024-12-06 16:56:41.531825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.848 [2024-12-06 16:56:41.531836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.848 [2024-12-06 16:56:41.532027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.848 [2024-12-06 16:56:41.532033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.848 [2024-12-06 16:56:41.532037] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.532051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.848 [2024-12-06 16:56:41.532065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.848 [2024-12-06 16:56:41.532075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.848 [2024-12-06 16:56:41.532255] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.848 [2024-12-06 16:56:41.532263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.848 [2024-12-06 16:56:41.532266] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.532280] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532288] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.848 [2024-12-06 16:56:41.532294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.848 [2024-12-06 16:56:41.532305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.848 [2024-12-06 16:56:41.532489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.848 [2024-12-06 16:56:41.532496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.848 [2024-12-06 16:56:41.532499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.532513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.848 [2024-12-06 16:56:41.532527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.848 [2024-12-06 16:56:41.532537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.848 [2024-12-06 16:56:41.532716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.848 [2024-12-06 16:56:41.532722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.848 [2024-12-06 16:56:41.532726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.848 [2024-12-06 16:56:41.532739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.848 [2024-12-06 16:56:41.532749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.848 [2024-12-06 16:56:41.532756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.848 [2024-12-06 16:56:41.532766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.849 [2024-12-06 16:56:41.532935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.849 [2024-12-06 16:56:41.532941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.849 [2024-12-06 16:56:41.532945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.532949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.849 [2024-12-06 16:56:41.532959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.532963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.532966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.849 [2024-12-06 16:56:41.532973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.849 [2024-12-06 16:56:41.532983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.849 [2024-12-06 16:56:41.533169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.849 [2024-12-06 16:56:41.533176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.849 [2024-12-06 16:56:41.533179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.849 [2024-12-06 16:56:41.533193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.849 [2024-12-06 16:56:41.533207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.849 [2024-12-06 16:56:41.533218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.849 [2024-12-06 16:56:41.533412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.849 [2024-12-06 16:56:41.533418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.849 [2024-12-06 16:56:41.533422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.849 [2024-12-06 16:56:41.533435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.849 [2024-12-06 16:56:41.533450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.849 [2024-12-06 16:56:41.533459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:52.849 [2024-12-06 16:56:41.533686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:52.849 [2024-12-06 16:56:41.533692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:52.849 [2024-12-06 16:56:41.533695] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:52.849 [2024-12-06 16:56:41.533709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:52.849 [2024-12-06 16:56:41.533717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:52.849 [2024-12-06 16:56:41.533725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:52.849 [2024-12-06 16:56:41.533736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:53.112 
[2024-12-06 16:56:41.533915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.112 [2024-12-06 16:56:41.533923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.112 [2024-12-06 16:56:41.533928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.533934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:53.112 [2024-12-06 16:56:41.533944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.533948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.533953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:53.112 [2024-12-06 16:56:41.533960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.112 [2024-12-06 16:56:41.533971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:53.112 [2024-12-06 16:56:41.534142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.112 [2024-12-06 16:56:41.534149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.112 [2024-12-06 16:56:41.534152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:53.112 [2024-12-06 16:56:41.534166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534170] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:53.112 [2024-12-06 16:56:41.534181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.112 [2024-12-06 16:56:41.534191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:53.112 [2024-12-06 16:56:41.534368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.112 [2024-12-06 16:56:41.534374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.112 [2024-12-06 16:56:41.534377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:53.112 [2024-12-06 16:56:41.534391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:53.112 [2024-12-06 16:56:41.534405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.112 [2024-12-06 16:56:41.534415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:53.112 [2024-12-06 16:56:41.534626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.112 [2024-12-06 16:56:41.534633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
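The run of repeated FABRIC PROPERTY GET commands and tcp_req completions above is the host polling the controller's CSTS register while the discovery controller shuts down; the "shutdown complete" record follows below. A minimal sketch of the same teardown through SPDK's public host API, assuming a controller handle previously opened with spdk_nvme_connect() (an untested illustration, not the test's own code):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Asynchronously detach a controller, polling until the shutdown
     * finishes -- the poll loop is what produces the repeated
     * FABRIC PROPERTY GET records seen in the log above. */
    static void
    shutdown_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_detach_ctx *ctx = NULL;

        if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
            return;
        }
        /* spdk_nvme_detach_poll_async() returns -EAGAIN until every
         * queued detach operation has completed. */
        while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
        }
    }

The blocking spdk_nvme_detach() wrapper drives the same sequence; the async variant is shown here because it makes the CSTS polling that fills the log explicit.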
00:29:53.112 [2024-12-06 16:56:41.534637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:53.112 [2024-12-06 16:56:41.534650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:53.112 [2024-12-06 16:56:41.534664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.112 [2024-12-06 16:56:41.534677] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:53.112 [2024-12-06 16:56:41.534879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.112 [2024-12-06 16:56:41.534885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.112 [2024-12-06 16:56:41.534888] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:53.112 [2024-12-06 16:56:41.534902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.534910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:53.112 [2024-12-06 16:56:41.534916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.112 [2024-12-06 16:56:41.534926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:53.112 [2024-12-06 16:56:41.535098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.112 [2024-12-06 16:56:41.539112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.112 [2024-12-06 16:56:41.539116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.539120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:53.112 [2024-12-06 16:56:41.539131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.539135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.539138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1063de0) 00:29:53.112 [2024-12-06 16:56:41.539145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.112 [2024-12-06 16:56:41.539156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10bf3c0, cid 3, qid 0 00:29:53.112 [2024-12-06 16:56:41.539324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.112 [2024-12-06 16:56:41.539330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.112 [2024-12-06 16:56:41.539334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.539338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x10bf3c0) on tqpair=0x1063de0 00:29:53.112 [2024-12-06 16:56:41.539345] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:29:53.112 00:29:53.112 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:53.112 [2024-12-06 16:56:41.569237] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:29:53.112 [2024-12-06 16:56:41.569268] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401497 ] 00:29:53.112 [2024-12-06 16:56:41.621353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:53.112 [2024-12-06 16:56:41.621420] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:53.112 [2024-12-06 16:56:41.621425] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:53.112 [2024-12-06 16:56:41.621441] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:53.112 [2024-12-06 16:56:41.621455] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:53.112 [2024-12-06 16:56:41.625392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:53.112 [2024-12-06 16:56:41.625433] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16adde0 0 00:29:53.112 [2024-12-06 16:56:41.633125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:53.112 [2024-12-06 16:56:41.633141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:53.112 [2024-12-06 16:56:41.633145] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:53.112 [2024-12-06 16:56:41.633149] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:53.112 [2024-12-06 16:56:41.633181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.633188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.112 [2024-12-06 16:56:41.633192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.112 [2024-12-06 16:56:41.633205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:53.113 [2024-12-06 16:56:41.633227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.639118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.639129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.639132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.639151] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 
0x0001 00:29:53.113 [2024-12-06 16:56:41.639158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:53.113 [2024-12-06 16:56:41.639163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:53.113 [2024-12-06 16:56:41.639178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.113 [2024-12-06 16:56:41.639195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.113 [2024-12-06 16:56:41.639211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.639423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.639430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.639433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.639443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:53.113 [2024-12-06 16:56:41.639450] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:53.113 [2024-12-06 16:56:41.639457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.113 [2024-12-06 16:56:41.639471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.113 [2024-12-06 16:56:41.639482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.639671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.639679] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.639683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.639692] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:53.113 [2024-12-06 16:56:41.639701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:53.113 [2024-12-06 16:56:41.639708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639716] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.113 [2024-12-06 16:56:41.639722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.113 [2024-12-06 16:56:41.639733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.639952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.639960] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.639963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639967] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.639972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:53.113 [2024-12-06 16:56:41.639982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.639990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.113 [2024-12-06 16:56:41.639996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.113 [2024-12-06 16:56:41.640007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.640225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.640232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.640236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.640244] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:53.113 [2024-12-06 16:56:41.640249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:53.113 [2024-12-06 16:56:41.640257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:53.113 [2024-12-06 16:56:41.640366] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:53.113 [2024-12-06 16:56:41.640371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:53.113 [2024-12-06 16:56:41.640379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.113 [2024-12-06 16:56:41.640395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:53.113 [2024-12-06 16:56:41.640407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.640613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.640619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.640623] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.640632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:53.113 [2024-12-06 16:56:41.640641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.113 [2024-12-06 16:56:41.640655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.113 [2024-12-06 16:56:41.640665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.640871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.640877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.640881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.640889] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:53.113 [2024-12-06 16:56:41.640894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:53.113 [2024-12-06 16:56:41.640902] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:53.113 [2024-12-06 16:56:41.640915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:53.113 [2024-12-06 16:56:41.640924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.640928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.113 [2024-12-06 16:56:41.640935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.113 [2024-12-06 16:56:41.640946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.113 [2024-12-06 16:56:41.641196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.113 [2024-12-06 16:56:41.641203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.113 [2024-12-06 16:56:41.641207] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
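The records above, from "setting state to connect adminq" through "CC.EN = 1 && CSTS.RDY = 1 - controller is ready", retrace for nqn.2016-06.io.spdk:cnode1 the same admin-queue state machine the discovery connection went through: FABRIC CONNECT, read vs, read cap, check en, enable, then IDENTIFY. A minimal sketch of reaching the same point from an application, reusing the transport string passed to spdk_nvme_identify via -r (an untested illustration; error handling trimmed):

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) != 0) {
            return 1;
        }

        /* Same target the log's identify run connects to. */
        memset(&trid, 0, sizeof(trid));
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* spdk_nvme_connect() runs the full init sequence recorded above:
         * FABRIC CONNECT, register reads, CC.EN = 1, wait for CSTS.RDY = 1,
         * then IDENTIFY CONTROLLER. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* Cached IDENTIFY CONTROLLER data; CNTLID and MDTS show up in the
         * *_identify_done records that follow in the log. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x, MDTS %u\n", cdata->cntlid, cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }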
00:29:53.113 [2024-12-06 16:56:41.641211] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=4096, cccid=0 00:29:53.113 [2024-12-06 16:56:41.641216] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1708f40) on tqpair(0x16adde0): expected_datao=0, payload_size=4096 00:29:53.113 [2024-12-06 16:56:41.641220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.641235] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.641240] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.682249] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.113 [2024-12-06 16:56:41.682260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.113 [2024-12-06 16:56:41.682271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.113 [2024-12-06 16:56:41.682275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.113 [2024-12-06 16:56:41.682285] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:53.113 [2024-12-06 16:56:41.682293] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:53.113 [2024-12-06 16:56:41.682298] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:53.113 [2024-12-06 16:56:41.682302] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:53.113 [2024-12-06 16:56:41.682307] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:53.113 [2024-12-06 16:56:41.682312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.682321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.682328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.682345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:53.114 [2024-12-06 16:56:41.682358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.114 [2024-12-06 16:56:41.682555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.114 [2024-12-06 16:56:41.682561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.114 [2024-12-06 16:56:41.682565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.114 [2024-12-06 16:56:41.682575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682579] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.682589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.114 [2024-12-06 16:56:41.682595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.682609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.114 [2024-12-06 16:56:41.682615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.682628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.114 [2024-12-06 16:56:41.682634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.682647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.114 [2024-12-06 16:56:41.682655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.682666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.682672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.682683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.114 [2024-12-06 16:56:41.682695] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1708f40, cid 0, qid 0 00:29:53.114 [2024-12-06 16:56:41.682700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17090c0, cid 1, qid 0 00:29:53.114 [2024-12-06 16:56:41.682705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709240, cid 2, qid 0 00:29:53.114 [2024-12-06 16:56:41.682710] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.114 [2024-12-06 16:56:41.682715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709540, cid 4, qid 0 00:29:53.114 [2024-12-06 16:56:41.682980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.114 [2024-12-06 
16:56:41.682987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.114 [2024-12-06 16:56:41.682990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.682994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709540) on tqpair=0x16adde0 00:29:53.114 [2024-12-06 16:56:41.682999] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:53.114 [2024-12-06 16:56:41.683004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.683013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.683021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.683029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.683033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.683036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.683043] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:53.114 [2024-12-06 16:56:41.683053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709540, cid 4, qid 0 00:29:53.114 [2024-12-06 16:56:41.687111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.114 [2024-12-06 16:56:41.687120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.114 [2024-12-06 16:56:41.687123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.687127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709540) on tqpair=0x16adde0 00:29:53.114 [2024-12-06 16:56:41.687197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.687208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.687216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.687219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.687229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.114 [2024-12-06 16:56:41.687241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709540, cid 4, qid 0 00:29:53.114 [2024-12-06 16:56:41.687429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.114 [2024-12-06 16:56:41.687436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.114 [2024-12-06 16:56:41.687440] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.687444] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=4096, cccid=4 00:29:53.114 [2024-12-06 16:56:41.687448] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709540) on tqpair(0x16adde0): expected_datao=0, payload_size=4096 00:29:53.114 [2024-12-06 16:56:41.687453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.687469] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.687473] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.732111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.114 [2024-12-06 16:56:41.732122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.114 [2024-12-06 16:56:41.732126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.732130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709540) on tqpair=0x16adde0 00:29:53.114 [2024-12-06 16:56:41.732142] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:53.114 [2024-12-06 16:56:41.732154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.732165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.732172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.732176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.732183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.114 [2024-12-06 16:56:41.732195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709540, cid 4, qid 0 00:29:53.114 [2024-12-06 16:56:41.732407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.114 [2024-12-06 16:56:41.732413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.114 [2024-12-06 16:56:41.732417] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.732421] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=4096, cccid=4 00:29:53.114 [2024-12-06 16:56:41.732425] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709540) on tqpair(0x16adde0): expected_datao=0, payload_size=4096 00:29:53.114 [2024-12-06 16:56:41.732430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.732444] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.732448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.773332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.114 [2024-12-06 16:56:41.773343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.114 [2024-12-06 16:56:41.773347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.773351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1709540) on tqpair=0x16adde0 00:29:53.114 [2024-12-06 16:56:41.773370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.773380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:53.114 [2024-12-06 16:56:41.773391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.114 [2024-12-06 16:56:41.773395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16adde0) 00:29:53.114 [2024-12-06 16:56:41.773402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.114 [2024-12-06 16:56:41.773415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709540, cid 4, qid 0 00:29:53.114 [2024-12-06 16:56:41.773592] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.114 [2024-12-06 16:56:41.773598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.114 [2024-12-06 16:56:41.773602] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.115 [2024-12-06 16:56:41.773606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=4096, cccid=4 00:29:53.115 [2024-12-06 16:56:41.773610] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709540) on tqpair(0x16adde0): expected_datao=0, payload_size=4096 00:29:53.115 [2024-12-06 16:56:41.773614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.115 [2024-12-06 16:56:41.773628] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.115 [2024-12-06 16:56:41.773632] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.377 [2024-12-06 16:56:41.815336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.377 [2024-12-06 16:56:41.815349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.377 [2024-12-06 16:56:41.815353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.377 [2024-12-06 16:56:41.815357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709540) on tqpair=0x16adde0 00:29:53.377 [2024-12-06 16:56:41.815366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:53.377 [2024-12-06 16:56:41.815375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:53.377 [2024-12-06 16:56:41.815386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:53.377 [2024-12-06 16:56:41.815394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:53.377 [2024-12-06 16:56:41.815400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:53.377 [2024-12-06 16:56:41.815406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
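The state transitions above walk the controller-initialization sequence end to end: wait for CSTS.RDY = 1, IDENTIFY controller, configure asynchronous event requests, set the keep-alive timeout, negotiate the number of I/O queues, identify the active namespace list, the namespace itself and its ID descriptors, then set supported log pages and features. The records that follow skip Set Features - Host ID (not sent over NVMe-oF) and mark the controller ready. All of it is driven by SPDK's identify example app connecting to the target; a minimal sketch of the same connect-and-identify exchange, assuming a target is already listening at the address used in this run and an in-tree build (the binary may also be installed as spdk_nvme_identify):

  # Connect to the NVMe-oF/TCP subsystem exercised above and print its
  # identify data; address, port, and NQN are the ones from this run.
  ./build/examples/identify \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'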
00:29:53.377 [2024-12-06 16:56:41.815411] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:53.377 [2024-12-06 16:56:41.815416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:53.377 [2024-12-06 16:56:41.815422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:53.377 [2024-12-06 16:56:41.815439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.377 [2024-12-06 16:56:41.815443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16adde0) 00:29:53.377 [2024-12-06 16:56:41.815451] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.377 [2024-12-06 16:56:41.815458] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.377 [2024-12-06 16:56:41.815461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.377 [2024-12-06 16:56:41.815465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16adde0) 00:29:53.377 [2024-12-06 16:56:41.815475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:53.377 [2024-12-06 16:56:41.815490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709540, cid 4, qid 0 00:29:53.377 [2024-12-06 16:56:41.815496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17096c0, cid 5, qid 0 00:29:53.377 [2024-12-06 16:56:41.815614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.377 [2024-12-06 16:56:41.815620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.377 [2024-12-06 16:56:41.815624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.377 [2024-12-06 16:56:41.815628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709540) on tqpair=0x16adde0 00:29:53.377 [2024-12-06 16:56:41.815634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.377 [2024-12-06 16:56:41.815640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.377 [2024-12-06 16:56:41.815644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.377 [2024-12-06 16:56:41.815648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17096c0) on tqpair=0x16adde0 00:29:53.377 [2024-12-06 16:56:41.815657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.815661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16adde0) 00:29:53.378 [2024-12-06 16:56:41.815667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.378 [2024-12-06 16:56:41.815678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17096c0, cid 5, qid 0 00:29:53.378 [2024-12-06 16:56:41.815869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.378 [2024-12-06 16:56:41.815876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.378 [2024-12-06 16:56:41.815879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.378 
[2024-12-06 16:56:41.815883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17096c0) on tqpair=0x16adde0 00:29:53.378 [2024-12-06 16:56:41.815892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.815896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16adde0) 00:29:53.378 [2024-12-06 16:56:41.815902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.378 [2024-12-06 16:56:41.815913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17096c0, cid 5, qid 0 00:29:53.378 [2024-12-06 16:56:41.816116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.378 [2024-12-06 16:56:41.816123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.378 [2024-12-06 16:56:41.816127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17096c0) on tqpair=0x16adde0 00:29:53.378 [2024-12-06 16:56:41.816140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16adde0) 00:29:53.378 [2024-12-06 16:56:41.816150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.378 [2024-12-06 16:56:41.816160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17096c0, cid 5, qid 0 00:29:53.378 [2024-12-06 16:56:41.816366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.378 [2024-12-06 16:56:41.816373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.378 [2024-12-06 16:56:41.816376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17096c0) on tqpair=0x16adde0 00:29:53.378 [2024-12-06 16:56:41.816396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16adde0) 00:29:53.378 [2024-12-06 16:56:41.816413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.378 [2024-12-06 16:56:41.816420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16adde0) 00:29:53.378 [2024-12-06 16:56:41.816430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.378 [2024-12-06 16:56:41.816438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16adde0) 00:29:53.378 [2024-12-06 16:56:41.816447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:53.378 [2024-12-06 16:56:41.816455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16adde0) 00:29:53.378 [2024-12-06 16:56:41.816464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.378 [2024-12-06 16:56:41.816476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17096c0, cid 5, qid 0 00:29:53.378 [2024-12-06 16:56:41.816481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709540, cid 4, qid 0 00:29:53.378 [2024-12-06 16:56:41.816486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1709840, cid 6, qid 0 00:29:53.378 [2024-12-06 16:56:41.816491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17099c0, cid 7, qid 0 00:29:53.378 [2024-12-06 16:56:41.816766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.378 [2024-12-06 16:56:41.816773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.378 [2024-12-06 16:56:41.816776] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816780] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=8192, cccid=5 00:29:53.378 [2024-12-06 16:56:41.816785] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17096c0) on tqpair(0x16adde0): expected_datao=0, payload_size=8192 00:29:53.378 [2024-12-06 16:56:41.816789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816898] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816903] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.378 [2024-12-06 16:56:41.816914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.378 [2024-12-06 16:56:41.816918] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816921] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=512, cccid=4 00:29:53.378 [2024-12-06 16:56:41.816926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709540) on tqpair(0x16adde0): expected_datao=0, payload_size=512 00:29:53.378 [2024-12-06 16:56:41.816930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816937] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816940] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.378 [2024-12-06 16:56:41.816952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.378 [2024-12-06 16:56:41.816955] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816961] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=512, cccid=6 00:29:53.378 [2024-12-06 16:56:41.816965] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1709840) on 
tqpair(0x16adde0): expected_datao=0, payload_size=512 00:29:53.378 [2024-12-06 16:56:41.816969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816976] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816979] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:53.378 [2024-12-06 16:56:41.816991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:53.378 [2024-12-06 16:56:41.816994] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.816998] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16adde0): datao=0, datal=4096, cccid=7 00:29:53.378 [2024-12-06 16:56:41.817002] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17099c0) on tqpair(0x16adde0): expected_datao=0, payload_size=4096 00:29:53.378 [2024-12-06 16:56:41.817006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.817013] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.817017] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.817025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.378 [2024-12-06 16:56:41.817030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.378 [2024-12-06 16:56:41.817034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.817038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17096c0) on tqpair=0x16adde0 00:29:53.378 [2024-12-06 16:56:41.817050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.378 [2024-12-06 16:56:41.817055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.378 [2024-12-06 16:56:41.817059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.817063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709540) on tqpair=0x16adde0 00:29:53.378 [2024-12-06 16:56:41.817073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.378 [2024-12-06 16:56:41.817079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.378 [2024-12-06 16:56:41.817082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.817086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709840) on tqpair=0x16adde0 00:29:53.378 [2024-12-06 16:56:41.817093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.378 [2024-12-06 16:56:41.817099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.378 [2024-12-06 16:56:41.821110] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.378 [2024-12-06 16:56:41.821114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17099c0) on tqpair=0x16adde0 00:29:53.378 ===================================================== 00:29:53.378 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.378 ===================================================== 00:29:53.378 Controller Capabilities/Features 00:29:53.378 ================================ 00:29:53.378 Vendor ID: 8086 00:29:53.378 Subsystem Vendor ID: 8086 
00:29:53.378 Serial Number: SPDK00000000000001 00:29:53.378 Model Number: SPDK bdev Controller 00:29:53.378 Firmware Version: 25.01 00:29:53.378 Recommended Arb Burst: 6 00:29:53.378 IEEE OUI Identifier: e4 d2 5c 00:29:53.378 Multi-path I/O 00:29:53.378 May have multiple subsystem ports: Yes 00:29:53.378 May have multiple controllers: Yes 00:29:53.378 Associated with SR-IOV VF: No 00:29:53.378 Max Data Transfer Size: 131072 00:29:53.378 Max Number of Namespaces: 32 00:29:53.378 Max Number of I/O Queues: 127 00:29:53.378 NVMe Specification Version (VS): 1.3 00:29:53.378 NVMe Specification Version (Identify): 1.3 00:29:53.378 Maximum Queue Entries: 128 00:29:53.378 Contiguous Queues Required: Yes 00:29:53.378 Arbitration Mechanisms Supported 00:29:53.378 Weighted Round Robin: Not Supported 00:29:53.378 Vendor Specific: Not Supported 00:29:53.378 Reset Timeout: 15000 ms 00:29:53.378 Doorbell Stride: 4 bytes 00:29:53.378 NVM Subsystem Reset: Not Supported 00:29:53.378 Command Sets Supported 00:29:53.378 NVM Command Set: Supported 00:29:53.378 Boot Partition: Not Supported 00:29:53.379 Memory Page Size Minimum: 4096 bytes 00:29:53.379 Memory Page Size Maximum: 4096 bytes 00:29:53.379 Persistent Memory Region: Not Supported 00:29:53.379 Optional Asynchronous Events Supported 00:29:53.379 Namespace Attribute Notices: Supported 00:29:53.379 Firmware Activation Notices: Not Supported 00:29:53.379 ANA Change Notices: Not Supported 00:29:53.379 PLE Aggregate Log Change Notices: Not Supported 00:29:53.379 LBA Status Info Alert Notices: Not Supported 00:29:53.379 EGE Aggregate Log Change Notices: Not Supported 00:29:53.379 Normal NVM Subsystem Shutdown event: Not Supported 00:29:53.379 Zone Descriptor Change Notices: Not Supported 00:29:53.379 Discovery Log Change Notices: Not Supported 00:29:53.379 Controller Attributes 00:29:53.379 128-bit Host Identifier: Supported 00:29:53.379 Non-Operational Permissive Mode: Not Supported 00:29:53.379 NVM Sets: Not Supported 00:29:53.379 Read Recovery Levels: Not Supported 00:29:53.379 Endurance Groups: Not Supported 00:29:53.379 Predictable Latency Mode: Not Supported 00:29:53.379 Traffic Based Keep ALive: Not Supported 00:29:53.379 Namespace Granularity: Not Supported 00:29:53.379 SQ Associations: Not Supported 00:29:53.379 UUID List: Not Supported 00:29:53.379 Multi-Domain Subsystem: Not Supported 00:29:53.379 Fixed Capacity Management: Not Supported 00:29:53.379 Variable Capacity Management: Not Supported 00:29:53.379 Delete Endurance Group: Not Supported 00:29:53.379 Delete NVM Set: Not Supported 00:29:53.379 Extended LBA Formats Supported: Not Supported 00:29:53.379 Flexible Data Placement Supported: Not Supported 00:29:53.379 00:29:53.379 Controller Memory Buffer Support 00:29:53.379 ================================ 00:29:53.379 Supported: No 00:29:53.379 00:29:53.379 Persistent Memory Region Support 00:29:53.379 ================================ 00:29:53.379 Supported: No 00:29:53.379 00:29:53.379 Admin Command Set Attributes 00:29:53.379 ============================ 00:29:53.379 Security Send/Receive: Not Supported 00:29:53.379 Format NVM: Not Supported 00:29:53.379 Firmware Activate/Download: Not Supported 00:29:53.379 Namespace Management: Not Supported 00:29:53.379 Device Self-Test: Not Supported 00:29:53.379 Directives: Not Supported 00:29:53.379 NVMe-MI: Not Supported 00:29:53.379 Virtualization Management: Not Supported 00:29:53.379 Doorbell Buffer Config: Not Supported 00:29:53.379 Get LBA Status Capability: Not Supported 00:29:53.379 Command & 
Feature Lockdown Capability: Not Supported 00:29:53.379 Abort Command Limit: 4 00:29:53.379 Async Event Request Limit: 4 00:29:53.379 Number of Firmware Slots: N/A 00:29:53.379 Firmware Slot 1 Read-Only: N/A 00:29:53.379 Firmware Activation Without Reset: N/A 00:29:53.379 Multiple Update Detection Support: N/A 00:29:53.379 Firmware Update Granularity: No Information Provided 00:29:53.379 Per-Namespace SMART Log: No 00:29:53.379 Asymmetric Namespace Access Log Page: Not Supported 00:29:53.379 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:53.379 Command Effects Log Page: Supported 00:29:53.379 Get Log Page Extended Data: Supported 00:29:53.379 Telemetry Log Pages: Not Supported 00:29:53.379 Persistent Event Log Pages: Not Supported 00:29:53.379 Supported Log Pages Log Page: May Support 00:29:53.379 Commands Supported & Effects Log Page: Not Supported 00:29:53.379 Feature Identifiers & Effects Log Page:May Support 00:29:53.379 NVMe-MI Commands & Effects Log Page: May Support 00:29:53.379 Data Area 4 for Telemetry Log: Not Supported 00:29:53.379 Error Log Page Entries Supported: 128 00:29:53.379 Keep Alive: Supported 00:29:53.379 Keep Alive Granularity: 10000 ms 00:29:53.379 00:29:53.379 NVM Command Set Attributes 00:29:53.379 ========================== 00:29:53.379 Submission Queue Entry Size 00:29:53.379 Max: 64 00:29:53.379 Min: 64 00:29:53.379 Completion Queue Entry Size 00:29:53.379 Max: 16 00:29:53.379 Min: 16 00:29:53.379 Number of Namespaces: 32 00:29:53.379 Compare Command: Supported 00:29:53.379 Write Uncorrectable Command: Not Supported 00:29:53.379 Dataset Management Command: Supported 00:29:53.379 Write Zeroes Command: Supported 00:29:53.379 Set Features Save Field: Not Supported 00:29:53.379 Reservations: Supported 00:29:53.379 Timestamp: Not Supported 00:29:53.379 Copy: Supported 00:29:53.379 Volatile Write Cache: Present 00:29:53.379 Atomic Write Unit (Normal): 1 00:29:53.379 Atomic Write Unit (PFail): 1 00:29:53.379 Atomic Compare & Write Unit: 1 00:29:53.379 Fused Compare & Write: Supported 00:29:53.379 Scatter-Gather List 00:29:53.379 SGL Command Set: Supported 00:29:53.379 SGL Keyed: Supported 00:29:53.379 SGL Bit Bucket Descriptor: Not Supported 00:29:53.379 SGL Metadata Pointer: Not Supported 00:29:53.379 Oversized SGL: Not Supported 00:29:53.379 SGL Metadata Address: Not Supported 00:29:53.379 SGL Offset: Supported 00:29:53.379 Transport SGL Data Block: Not Supported 00:29:53.379 Replay Protected Memory Block: Not Supported 00:29:53.379 00:29:53.379 Firmware Slot Information 00:29:53.379 ========================= 00:29:53.379 Active slot: 1 00:29:53.379 Slot 1 Firmware Revision: 25.01 00:29:53.379 00:29:53.379 00:29:53.379 Commands Supported and Effects 00:29:53.379 ============================== 00:29:53.379 Admin Commands 00:29:53.379 -------------- 00:29:53.379 Get Log Page (02h): Supported 00:29:53.379 Identify (06h): Supported 00:29:53.379 Abort (08h): Supported 00:29:53.379 Set Features (09h): Supported 00:29:53.379 Get Features (0Ah): Supported 00:29:53.379 Asynchronous Event Request (0Ch): Supported 00:29:53.379 Keep Alive (18h): Supported 00:29:53.379 I/O Commands 00:29:53.379 ------------ 00:29:53.379 Flush (00h): Supported LBA-Change 00:29:53.379 Write (01h): Supported LBA-Change 00:29:53.379 Read (02h): Supported 00:29:53.379 Compare (05h): Supported 00:29:53.379 Write Zeroes (08h): Supported LBA-Change 00:29:53.379 Dataset Management (09h): Supported LBA-Change 00:29:53.379 Copy (19h): Supported LBA-Change 00:29:53.379 00:29:53.379 Error Log 00:29:53.379 
========= 00:29:53.379 00:29:53.379 Arbitration 00:29:53.379 =========== 00:29:53.379 Arbitration Burst: 1 00:29:53.379 00:29:53.379 Power Management 00:29:53.379 ================ 00:29:53.379 Number of Power States: 1 00:29:53.379 Current Power State: Power State #0 00:29:53.379 Power State #0: 00:29:53.379 Max Power: 0.00 W 00:29:53.379 Non-Operational State: Operational 00:29:53.379 Entry Latency: Not Reported 00:29:53.379 Exit Latency: Not Reported 00:29:53.379 Relative Read Throughput: 0 00:29:53.379 Relative Read Latency: 0 00:29:53.379 Relative Write Throughput: 0 00:29:53.379 Relative Write Latency: 0 00:29:53.379 Idle Power: Not Reported 00:29:53.379 Active Power: Not Reported 00:29:53.379 Non-Operational Permissive Mode: Not Supported 00:29:53.379 00:29:53.379 Health Information 00:29:53.379 ================== 00:29:53.379 Critical Warnings: 00:29:53.379 Available Spare Space: OK 00:29:53.379 Temperature: OK 00:29:53.379 Device Reliability: OK 00:29:53.379 Read Only: No 00:29:53.379 Volatile Memory Backup: OK 00:29:53.379 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:53.379 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:53.379 Available Spare: 0% 00:29:53.379 Available Spare Threshold: 0% 00:29:53.379 Life Percentage Used: 0% [2024-12-06 16:56:41.821218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.379 [2024-12-06 16:56:41.821223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16adde0) 00:29:53.379 [2024-12-06 16:56:41.821231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.379 [2024-12-06 16:56:41.821245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17099c0, cid 7, qid 0 00:29:53.379 [2024-12-06 16:56:41.821466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.379 [2024-12-06 16:56:41.821473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.379 [2024-12-06 16:56:41.821476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.379 [2024-12-06 16:56:41.821480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17099c0) on tqpair=0x16adde0 00:29:53.379 [2024-12-06 16:56:41.821516] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:53.379 [2024-12-06 16:56:41.821530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1708f40) on tqpair=0x16adde0 00:29:53.379 [2024-12-06 16:56:41.821537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.379 [2024-12-06 16:56:41.821542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17090c0) on tqpair=0x16adde0 00:29:53.379 [2024-12-06 16:56:41.821547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.379 [2024-12-06 16:56:41.821552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1709240) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.821557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.380 [2024-12-06 16:56:41.821562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.821566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:53.380 [2024-12-06 16:56:41.821575] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.821579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.821583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.821590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.821602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.821820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.821826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.821830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.821834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.821841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.821845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.821848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.821855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.821868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.822088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.822094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.822098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.822116] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:53.380 [2024-12-06 16:56:41.822121] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:53.380 [2024-12-06 16:56:41.822131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.822145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.822156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.822423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.822429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 
16:56:41.822433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.822447] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.822461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.822472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.822675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.822681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.822685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.822699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.822714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.822724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.822928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.822935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.822938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.822952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.822959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.822966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.822976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.823158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.823165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.823168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on 
tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.823182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.823196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.823207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.823430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.823437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.823442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.823456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.823470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.823480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.823682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.823688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.823691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.823705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823709] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823712] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.823719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.823729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.823986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.823992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.823995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.823999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.824009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824013] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.824023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.824033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.824200] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.824206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.824210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.824223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.380 [2024-12-06 16:56:41.824238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.380 [2024-12-06 16:56:41.824248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.380 [2024-12-06 16:56:41.824487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.380 [2024-12-06 16:56:41.824493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.380 [2024-12-06 16:56:41.824497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.380 [2024-12-06 16:56:41.824513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.380 [2024-12-06 16:56:41.824520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.381 [2024-12-06 16:56:41.824527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.381 [2024-12-06 16:56:41.824537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.381 [2024-12-06 16:56:41.824793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.381 [2024-12-06 16:56:41.824799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.381 [2024-12-06 16:56:41.824802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.381 [2024-12-06 16:56:41.824806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.381 [2024-12-06 16:56:41.824816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.381 [2024-12-06 16:56:41.824820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.381 [2024-12-06 16:56:41.824823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.381 
[2024-12-06 16:56:41.824830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.381 [2024-12-06 16:56:41.824840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.381 [2024-12-06 16:56:41.825093] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.381 [2024-12-06 16:56:41.829104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.381 [2024-12-06 16:56:41.829109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.381 [2024-12-06 16:56:41.829113] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.381 [2024-12-06 16:56:41.829124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:53.381 [2024-12-06 16:56:41.829128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:53.381 [2024-12-06 16:56:41.829131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16adde0) 00:29:53.381 [2024-12-06 16:56:41.829138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:53.381 [2024-12-06 16:56:41.829149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17093c0, cid 3, qid 0 00:29:53.381 [2024-12-06 16:56:41.829339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:53.381 [2024-12-06 16:56:41.829346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:53.381 [2024-12-06 16:56:41.829349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:53.381 [2024-12-06 16:56:41.829353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17093c0) on tqpair=0x16adde0 00:29:53.381 [2024-12-06 16:56:41.829362] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:29:53.381 Data Units Read: 0 00:29:53.381 Data Units Written: 0 00:29:53.381 Host Read Commands: 0 00:29:53.381 Host Write Commands: 0 00:29:53.381 Controller Busy Time: 0 minutes 00:29:53.381 Power Cycles: 0 00:29:53.381 Power On Hours: 0 hours 00:29:53.381 Unsafe Shutdowns: 0 00:29:53.381 Unrecoverable Media Errors: 0 00:29:53.381 Lifetime Error Log Entries: 0 00:29:53.381 Warning Temperature Time: 0 minutes 00:29:53.381 Critical Temperature Time: 0 minutes 00:29:53.381 00:29:53.381 Number of Queues 00:29:53.381 ================ 00:29:53.381 Number of I/O Submission Queues: 127 00:29:53.381 Number of I/O Completion Queues: 127 00:29:53.381 00:29:53.381 Active Namespaces 00:29:53.381 ================= 00:29:53.381 Namespace ID:1 00:29:53.381 Error Recovery Timeout: Unlimited 00:29:53.381 Command Set Identifier: NVM (00h) 00:29:53.381 Deallocate: Supported 00:29:53.381 Deallocated/Unwritten Error: Not Supported 00:29:53.381 Deallocated Read Value: Unknown 00:29:53.381 Deallocate in Write Zeroes: Not Supported 00:29:53.381 Deallocated Guard Field: 0xFFFF 00:29:53.381 Flush: Supported 00:29:53.381 Reservation: Supported 00:29:53.381 Namespace Sharing Capabilities: Multiple Controllers 00:29:53.381 Size (in LBAs): 131072 (0GiB) 00:29:53.381 Capacity (in LBAs): 131072 (0GiB) 00:29:53.381 Utilization (in LBAs): 131072 (0GiB) 00:29:53.381 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:53.381 EUI64: ABCDEF0123456789 00:29:53.381 UUID: 5351d374-6845-4b17-8978-131e650d14f8 
00:29:53.381 Thin Provisioning: Not Supported 00:29:53.381 Per-NS Atomic Units: Yes 00:29:53.381 Atomic Boundary Size (Normal): 0 00:29:53.381 Atomic Boundary Size (PFail): 0 00:29:53.381 Atomic Boundary Offset: 0 00:29:53.381 Maximum Single Source Range Length: 65535 00:29:53.381 Maximum Copy Length: 65535 00:29:53.381 Maximum Source Range Count: 1 00:29:53.381 NGUID/EUI64 Never Reused: No 00:29:53.381 Namespace Write Protected: No 00:29:53.381 Number of LBA Formats: 1 00:29:53.381 Current LBA Format: LBA Format #00 00:29:53.381 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:53.381 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:53.381 rmmod nvme_tcp 00:29:53.381 rmmod nvme_fabrics 00:29:53.381 rmmod nvme_keyring 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2401255 ']' 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2401255 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2401255 ']' 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2401255 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401255 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401255' 00:29:53.381 killing process with pid 2401255 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@973 -- # kill 2401255 00:29:53.381 16:56:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2401255 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:53.641 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.642 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:53.642 16:56:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.547 16:56:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:55.547 00:29:55.547 real 0m9.386s 00:29:55.547 user 0m7.805s 00:29:55.547 sys 0m4.546s 00:29:55.547 16:56:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.547 16:56:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:55.547 ************************************ 00:29:55.547 END TEST nvmf_identify 00:29:55.547 ************************************ 00:29:55.548 16:56:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:55.548 16:56:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:55.548 16:56:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.548 16:56:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:55.808 ************************************ 00:29:55.808 START TEST nvmf_perf 00:29:55.808 ************************************ 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:55.808 * Looking for test storage... 
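The nvmf_perf stage starting here builds a full NVMe-oF TCP target over JSON-RPC before sweeping queue depths and IO sizes with spdk_nvme_perf. Condensed from the rpc.py calls traced later in this same run, the target bring-up is roughly the sketch below; this is a hedged outline rather than the verbatim perf.sh logic, and the NQN, serial, Malloc0/Nvme0n1 namespaces, and 10.0.0.2:4420 listener are all values specific to this job:

    # Hedged sketch of the perf.sh target bring-up; every command is lifted
    # from the rpc.py calls traced below in this run.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o          # initialize the TCP transport
    $rpc bdev_malloc_create 64 512                # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # local NVMe at 0000:65:00.0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420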
00:29:55.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.808 --rc genhtml_branch_coverage=1 00:29:55.808 --rc genhtml_function_coverage=1 00:29:55.808 --rc genhtml_legend=1 00:29:55.808 --rc geninfo_all_blocks=1 00:29:55.808 --rc geninfo_unexecuted_blocks=1 00:29:55.808 00:29:55.808 ' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.808 --rc genhtml_branch_coverage=1 00:29:55.808 --rc genhtml_function_coverage=1 00:29:55.808 --rc genhtml_legend=1 00:29:55.808 --rc geninfo_all_blocks=1 00:29:55.808 --rc geninfo_unexecuted_blocks=1 00:29:55.808 00:29:55.808 ' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.808 --rc genhtml_branch_coverage=1 00:29:55.808 --rc genhtml_function_coverage=1 00:29:55.808 --rc genhtml_legend=1 00:29:55.808 --rc geninfo_all_blocks=1 00:29:55.808 --rc geninfo_unexecuted_blocks=1 00:29:55.808 00:29:55.808 ' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:55.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.808 --rc genhtml_branch_coverage=1 00:29:55.808 --rc genhtml_function_coverage=1 00:29:55.808 --rc genhtml_legend=1 00:29:55.808 --rc geninfo_all_blocks=1 00:29:55.808 --rc geninfo_unexecuted_blocks=1 00:29:55.808 00:29:55.808 ' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.808 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:55.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.809 16:56:44 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:55.809 16:56:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:02.383 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:02.383 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:02.383 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:02.384 Found net devices under 0000:31:00.0: cvl_0_0 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:02.384 16:56:49 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:02.384 Found net devices under 0000:31:00.1: cvl_0_1 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:02.384 16:56:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:02.384 16:56:50 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:02.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:02.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:30:02.384 00:30:02.384 --- 10.0.0.2 ping statistics --- 00:30:02.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.384 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:02.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:02.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:30:02.384 00:30:02.384 --- 10.0.0.1 ping statistics --- 00:30:02.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:02.384 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2405951 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2405951 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2405951 ']' 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
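At this point nvmf_tcp_init has split the two e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables rule admitting TCP port 4420 and a ping in each direction as a sanity check. A minimal standalone sketch of the same setup, using the interface names, addresses, and iptables rule exactly as they appear in the trace above:

    # Hedged reconstruction of the nvmf_tcp_init steps traced above; the
    # cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addresses come from this run.
    sudo ip -4 addr flush cvl_0_0                     # clear any stale addresses
    sudo ip -4 addr flush cvl_0_1
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port into the namespace
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The matching teardown near the end of the test is also visible in this log: modprobe -r of nvme-tcp/nvme-fabrics/nvme-keyring, an iptables-save | grep -v SPDK_NVMF | iptables-restore, and a flush of cvl_0_1; the namespace removal itself runs under _remove_spdk_ns with tracing disabled, so it is assumed (not shown) to amount to an ip netns del cvl_0_0_ns_spdk.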
00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:02.384 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:02.384 [2024-12-06 16:56:50.138133] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:30:02.384 [2024-12-06 16:56:50.138200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.384 [2024-12-06 16:56:50.231084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:02.384 [2024-12-06 16:56:50.259139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.385 [2024-12-06 16:56:50.259188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.385 [2024-12-06 16:56:50.259196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.385 [2024-12-06 16:56:50.259204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.385 [2024-12-06 16:56:50.259210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.385 [2024-12-06 16:56:50.261067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.385 [2024-12-06 16:56:50.261229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:02.385 [2024-12-06 16:56:50.261430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.385 [2024-12-06 16:56:50.261430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:02.385 16:56:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:02.953 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:02.953 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:02.953 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:30:02.953 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:03.212 16:56:51 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:03.212 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:30:03.212 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:03.212 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:03.212 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:03.472 [2024-12-06 16:56:51.962632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.472 16:56:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.472 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:03.472 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.731 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:03.732 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:03.992 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:03.992 [2024-12-06 16:56:52.606422] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:03.992 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:04.251 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:30:04.251 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:04.251 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:04.251 16:56:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:30:05.630 Initializing NVMe Controllers 00:30:05.630 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:30:05.630 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:30:05.630 Initialization complete. Launching workers. 
00:30:05.630 ======================================================== 00:30:05.630 Latency(us) 00:30:05.630 Device Information : IOPS MiB/s Average min max 00:30:05.630 PCIE (0000:65:00.0) NSID 1 from core 0: 108750.45 424.81 293.67 11.36 8213.58 00:30:05.630 ======================================================== 00:30:05.630 Total : 108750.45 424.81 293.67 11.36 8213.58 00:30:05.630 00:30:05.630 16:56:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:06.566 Initializing NVMe Controllers 00:30:06.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:06.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:06.566 Initialization complete. Launching workers. 00:30:06.566 ======================================================== 00:30:06.566 Latency(us) 00:30:06.566 Device Information : IOPS MiB/s Average min max 00:30:06.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11845.50 257.53 45606.72 00:30:06.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24551.41 7963.22 50878.62 00:30:06.566 ======================================================== 00:30:06.566 Total : 128.00 0.50 15915.36 257.53 50878.62 00:30:06.566 00:30:06.566 16:56:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:07.942 Initializing NVMe Controllers 00:30:07.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:07.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:07.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:07.942 Initialization complete. Launching workers. 00:30:07.942 ======================================================== 00:30:07.942 Latency(us) 00:30:07.942 Device Information : IOPS MiB/s Average min max 00:30:07.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11879.00 46.40 2695.50 487.24 6202.28 00:30:07.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3758.00 14.68 8562.37 7244.67 15970.73 00:30:07.942 ======================================================== 00:30:07.942 Total : 15637.00 61.08 4105.47 487.24 15970.73 00:30:07.942 00:30:07.942 16:56:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:07.942 16:56:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:07.942 16:56:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:10.481 Initializing NVMe Controllers 00:30:10.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.481 Controller IO queue size 128, less than required. 00:30:10.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:10.481 Controller IO queue size 128, less than required. 00:30:10.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:10.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:10.481 Initialization complete. Launching workers. 00:30:10.481 ======================================================== 00:30:10.481 Latency(us) 00:30:10.481 Device Information : IOPS MiB/s Average min max 00:30:10.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1798.45 449.61 72552.16 45095.19 109955.29 00:30:10.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 614.48 153.62 214945.63 72772.11 315902.67 00:30:10.481 ======================================================== 00:30:10.481 Total : 2412.93 603.23 108814.40 45095.19 315902.67 00:30:10.481 00:30:10.481 16:56:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:10.481 No valid NVMe controllers or AIO or URING devices found 00:30:10.481 Initializing NVMe Controllers 00:30:10.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:10.481 Controller IO queue size 128, less than required. 00:30:10.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.481 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:10.481 Controller IO queue size 128, less than required. 00:30:10.481 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:10.481 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:10.481 WARNING: Some requested NVMe devices were skipped 00:30:10.482 16:56:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:13.015 Initializing NVMe Controllers 00:30:13.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:13.015 Controller IO queue size 128, less than required. 00:30:13.015 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:13.015 Controller IO queue size 128, less than required. 00:30:13.015 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:13.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:13.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:13.015 Initialization complete. Launching workers. 
00:30:13.015 00:30:13.015 ==================== 00:30:13.015 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:13.015 TCP transport: 00:30:13.015 polls: 42831 00:30:13.015 idle_polls: 26075 00:30:13.015 sock_completions: 16756 00:30:13.016 nvme_completions: 6653 00:30:13.016 submitted_requests: 9992 00:30:13.016 queued_requests: 1 00:30:13.016 00:30:13.016 ==================== 00:30:13.016 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:13.016 TCP transport: 00:30:13.016 polls: 41345 00:30:13.016 idle_polls: 24185 00:30:13.016 sock_completions: 17160 00:30:13.016 nvme_completions: 6673 00:30:13.016 submitted_requests: 9978 00:30:13.016 queued_requests: 1 00:30:13.016 ======================================================== 00:30:13.016 Latency(us) 00:30:13.016 Device Information : IOPS MiB/s Average min max 00:30:13.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1661.09 415.27 77686.96 43554.69 127741.80 00:30:13.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1666.08 416.52 77999.19 28706.02 122339.88 00:30:13.016 ======================================================== 00:30:13.016 Total : 3327.17 831.79 77843.31 28706.02 127741.80 00:30:13.016 00:30:13.016 16:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:13.016 16:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.016 16:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:13.016 16:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:30:13.016 16:57:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=51818fbb-14ec-4393-bfc4-105784a03f84 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 51818fbb-14ec-4393-bfc4-105784a03f84 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=51818fbb-14ec-4393-bfc4-105784a03f84 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:14.395 { 00:30:14.395 "uuid": "51818fbb-14ec-4393-bfc4-105784a03f84", 00:30:14.395 "name": "lvs_0", 00:30:14.395 "base_bdev": "Nvme0n1", 00:30:14.395 "total_data_clusters": 457407, 00:30:14.395 "free_clusters": 457407, 00:30:14.395 "block_size": 512, 00:30:14.395 "cluster_size": 4194304 00:30:14.395 } 00:30:14.395 ]' 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="51818fbb-14ec-4393-bfc4-105784a03f84") .free_clusters' 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=457407 00:30:14.395 16:57:02 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="51818fbb-14ec-4393-bfc4-105784a03f84") .cluster_size' 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1829628 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1829628 00:30:14.395 1829628 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:14.395 16:57:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 51818fbb-14ec-4393-bfc4-105784a03f84 lbd_0 20480 00:30:14.395 16:57:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=90dd210e-aa8e-4b7c-8c73-ec91cd1365a0 00:30:14.395 16:57:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 90dd210e-aa8e-4b7c-8c73-ec91cd1365a0 lvs_n_0 00:30:16.303 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=4b52bc1d-c4aa-462b-bd34-89a1bf0d3903 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 4b52bc1d-c4aa-462b-bd34-89a1bf0d3903 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=4b52bc1d-c4aa-462b-bd34-89a1bf0d3903 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:16.304 { 00:30:16.304 "uuid": "51818fbb-14ec-4393-bfc4-105784a03f84", 00:30:16.304 "name": "lvs_0", 00:30:16.304 "base_bdev": "Nvme0n1", 00:30:16.304 "total_data_clusters": 457407, 00:30:16.304 "free_clusters": 452287, 00:30:16.304 "block_size": 512, 00:30:16.304 "cluster_size": 4194304 00:30:16.304 }, 00:30:16.304 { 00:30:16.304 "uuid": "4b52bc1d-c4aa-462b-bd34-89a1bf0d3903", 00:30:16.304 "name": "lvs_n_0", 00:30:16.304 "base_bdev": "90dd210e-aa8e-4b7c-8c73-ec91cd1365a0", 00:30:16.304 "total_data_clusters": 5114, 00:30:16.304 "free_clusters": 5114, 00:30:16.304 "block_size": 512, 00:30:16.304 "cluster_size": 4194304 00:30:16.304 } 00:30:16.304 ]' 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="4b52bc1d-c4aa-462b-bd34-89a1bf0d3903") .free_clusters' 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="4b52bc1d-c4aa-462b-bd34-89a1bf0d3903") .cluster_size' 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:30:16.304 20456 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:16.304 16:57:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4b52bc1d-c4aa-462b-bd34-89a1bf0d3903 lbd_nest_0 20456 00:30:16.563 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=4a9babd8-672e-4bba-82bf-5c45e1313c48 00:30:16.563 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:16.563 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:16.563 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4a9babd8-672e-4bba-82bf-5c45e1313c48 00:30:16.822 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.080 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:17.080 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:17.080 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:17.080 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:17.080 16:57:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:29.290 Initializing NVMe Controllers 00:30:29.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:29.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:29.290 Initialization complete. Launching workers. 00:30:29.290 ======================================================== 00:30:29.290 Latency(us) 00:30:29.290 Device Information : IOPS MiB/s Average min max 00:30:29.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.10 0.02 21738.34 108.53 49714.23 00:30:29.290 ======================================================== 00:30:29.290 Total : 46.10 0.02 21738.34 108.53 49714.23 00:30:29.290 00:30:29.290 16:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:29.290 16:57:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:37.419 Initializing NVMe Controllers 00:30:37.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:37.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:37.419 Initialization complete. Launching workers. 
00:30:37.419 ======================================================== 00:30:37.419 Latency(us) 00:30:37.419 Device Information : IOPS MiB/s Average min max 00:30:37.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 61.68 7.71 16212.88 6984.26 47890.36 00:30:37.419 ======================================================== 00:30:37.419 Total : 61.68 7.71 16212.88 6984.26 47890.36 00:30:37.419 00:30:37.678 16:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:37.678 16:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:37.678 16:57:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:47.884 Initializing NVMe Controllers 00:30:47.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:47.884 Initialization complete. Launching workers. 00:30:47.884 ======================================================== 00:30:47.884 Latency(us) 00:30:47.884 Device Information : IOPS MiB/s Average min max 00:30:47.884 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8883.10 4.34 3602.14 355.61 10050.88 00:30:47.884 ======================================================== 00:30:47.884 Total : 8883.10 4.34 3602.14 355.61 10050.88 00:30:47.884 00:30:47.884 16:57:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:47.884 16:57:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.134 Initializing NVMe Controllers 00:31:00.134 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.134 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.134 Initialization complete. Launching workers. 00:31:00.134 ======================================================== 00:31:00.134 Latency(us) 00:31:00.134 Device Information : IOPS MiB/s Average min max 00:31:00.134 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3961.80 495.22 8080.41 747.95 21767.66 00:31:00.134 ======================================================== 00:31:00.134 Total : 3961.80 495.22 8080.41 747.95 21767.66 00:31:00.134 00:31:00.134 16:57:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:00.134 16:57:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:00.134 16:57:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.111 Initializing NVMe Controllers 00:31:10.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.111 Controller IO queue size 128, less than required. 00:31:10.111 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:10.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.111 Initialization complete. Launching workers. 00:31:10.111 ======================================================== 00:31:10.111 Latency(us) 00:31:10.111 Device Information : IOPS MiB/s Average min max 00:31:10.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15858.13 7.74 8076.48 1314.96 19118.90 00:31:10.111 ======================================================== 00:31:10.111 Total : 15858.13 7.74 8076.48 1314.96 19118.90 00:31:10.111 00:31:10.111 16:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:10.111 16:57:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.108 Initializing NVMe Controllers 00:31:20.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.108 Controller IO queue size 128, less than required. 00:31:20.108 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:20.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.108 Initialization complete. Launching workers. 00:31:20.108 ======================================================== 00:31:20.108 Latency(us) 00:31:20.108 Device Information : IOPS MiB/s Average min max 00:31:20.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1171.80 146.47 109522.47 16819.75 235402.89 00:31:20.108 ======================================================== 00:31:20.108 Total : 1171.80 146.47 109522.47 16819.75 235402.89 00:31:20.108 00:31:20.108 16:58:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.108 16:58:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4a9babd8-672e-4bba-82bf-5c45e1313c48 00:31:20.676 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:20.935 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 90dd210e-aa8e-4b7c-8c73-ec91cd1365a0 00:31:20.935 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:21.194 rmmod nvme_tcp 
00:31:21.194 rmmod nvme_fabrics 00:31:21.194 rmmod nvme_keyring 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2405951 ']' 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2405951 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2405951 ']' 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2405951 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2405951 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:21.194 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2405951' 00:31:21.194 killing process with pid 2405951 00:31:21.195 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2405951 00:31:21.195 16:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2405951 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.102 16:58:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:25.638 00:31:25.638 real 1m29.574s 00:31:25.638 user 5m21.404s 00:31:25.638 sys 0m13.552s 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:25.638 ************************************ 00:31:25.638 END TEST nvmf_perf 00:31:25.638 ************************************ 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
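The firewall cleanup above (the iptr helper at nvmf/common.sh@297/@791) works because every rule the harness inserts carries an 'SPDK_NVMF:' comment (visible again when fio.sh installs its own ACCEPT rule later), so a single filter pass removes all harness rules without any bookkeeping:

    iptables-save | grep -v SPDK_NVMF | iptables-restore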
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:25.638 ************************************ 00:31:25.638 START TEST nvmf_fio_host 00:31:25.638 ************************************ 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:25.638 * Looking for test storage... 00:31:25.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:25.638 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:25.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.639 --rc genhtml_branch_coverage=1 00:31:25.639 --rc genhtml_function_coverage=1 00:31:25.639 --rc genhtml_legend=1 00:31:25.639 --rc geninfo_all_blocks=1 00:31:25.639 --rc geninfo_unexecuted_blocks=1 00:31:25.639 00:31:25.639 ' 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:25.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.639 --rc genhtml_branch_coverage=1 00:31:25.639 --rc genhtml_function_coverage=1 00:31:25.639 --rc genhtml_legend=1 00:31:25.639 --rc geninfo_all_blocks=1 00:31:25.639 --rc geninfo_unexecuted_blocks=1 00:31:25.639 00:31:25.639 ' 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:25.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.639 --rc genhtml_branch_coverage=1 00:31:25.639 --rc genhtml_function_coverage=1 00:31:25.639 --rc genhtml_legend=1 00:31:25.639 --rc geninfo_all_blocks=1 00:31:25.639 --rc geninfo_unexecuted_blocks=1 00:31:25.639 00:31:25.639 ' 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:25.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:25.639 --rc genhtml_branch_coverage=1 00:31:25.639 --rc genhtml_function_coverage=1 00:31:25.639 --rc genhtml_legend=1 00:31:25.639 --rc geninfo_all_blocks=1 00:31:25.639 --rc geninfo_unexecuted_blocks=1 00:31:25.639 00:31:25.639 ' 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.639 16:58:13 
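The block above is scripts/common.sh deciding whether the installed lcov predates 2.x, since lcov 2 changed the rc option names; "lt 1.15 2" is true here, so the legacy --rc lcov_branch_coverage=1 spelling gets exported. A minimal sketch of the same component-wise compare, dropping the decimal() input validation the real script performs:

    cmp_lt() {
        local IFS=.-:                 # split version strings on dots, dashes, colons
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                      # equal is not less-than
    }
    cmp_lt 1.15 2 && echo 'lcov < 2: use legacy --rc option names'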
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:25.639 16:58:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:25.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:25.639 
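The non-fatal complaint above, "line 33: [: : integer expression expected", comes from '[' '' -eq 1 ']': an unset environment variable expanding to the empty string inside a numeric test. The harness tolerates it (the test simply fails and the branch is skipped), but the usual defensive spelling gives the variable a numeric default first. Illustrative only; the variable name and branch body below are stand-ins, not the actual nvmf/common.sh line 33:

    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then   # :-0 keeps the operand numeric
        NVMF_APP+=(--some-extra-arg)            # hypothetical action
    fi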
16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:25.639 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:25.640 16:58:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:30.908 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:30.908 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:30.908 Found net devices under 0000:31:00.0: cvl_0_0 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:30.908 Found net devices under 0000:31:00.1: cvl_0_1 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.908 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.909 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.909 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.909 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.909 16:58:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:31:30.909 00:31:30.909 --- 10.0.0.2 ping statistics --- 00:31:30.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.909 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
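nvmf_tcp_init above builds the two-endpoint topology this "phy" test needs on a single host: the target port (cvl_0_0, 10.0.0.2) moves into its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace, so traffic between them traverses real ports (presumably cabled back to back) instead of loopback. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow the NVMe/TCP port in, tagged for wholesale cleanup later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The two pings that follow verify both directions before any NVMe traffic is attempted.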
00:31:30.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:31:30.909 00:31:30.909 --- 10.0.0.1 ping statistics --- 00:31:30.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.909 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2428195 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2428195 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2428195 ']' 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.909 16:58:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:30.909 [2024-12-06 16:58:19.311059] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:31:30.909 [2024-12-06 16:58:19.311133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.909 [2024-12-06 16:58:19.402720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.909 [2024-12-06 16:58:19.431126] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.909 [2024-12-06 16:58:19.431176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.909 [2024-12-06 16:58:19.431185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.909 [2024-12-06 16:58:19.431192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.909 [2024-12-06 16:58:19.431199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.909 [2024-12-06 16:58:19.433452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.909 [2024-12-06 16:58:19.433617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.909 [2024-12-06 16:58:19.433740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.909 [2024-12-06 16:58:19.433741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:31.479 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.479 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:31.479 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:31.737 [2024-12-06 16:58:20.241795] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.737 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:31.737 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:31.738 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:31.738 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:31.996 Malloc1 00:31:31.996 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.996 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:32.255 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.255 [2024-12-06 16:58:20.928431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.255 16:58:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
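With nvmf_tgt listening, host/fio.sh stands up a complete NVMe-oF subsystem over the six RPCs visible above. "-t tcp -o" is the NVMF_TRANSPORT_OPTS value set earlier, "-u 8192" sets the transport IO unit size, and the malloc bdev is 64 MiB of RAM carved into 512-byte blocks:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420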
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:32.514 16:58:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:32.773 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:32.773 fio-3.35 00:31:32.773 Starting 1 thread 00:31:35.308 00:31:35.308 test: (groupid=0, jobs=1): 
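The fio_nvme wrapper above boils down to preloading SPDK's fio ioengine and encoding the connection in --filename, where space-separated key=value pairs name the transport endpoint and ns=1 selects the namespace (example_config.fio supplies ioengine=spdk, as the job banner confirms):

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096

The ldd/grep/awk dance beforehand checks whether the plugin links against an ASAN runtime and would preload that library too; both lookups come back empty here, so only the plugin itself is preloaded.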
err= 0: pid=2428977: Fri Dec 6 16:58:23 2024 00:31:35.308 read: IOPS=13.9k, BW=54.3MiB/s (57.0MB/s)(109MiB/2005msec) 00:31:35.308 slat (nsec): min=1429, max=100630, avg=1829.12, stdev=902.80 00:31:35.308 clat (usec): min=1597, max=8743, avg=5064.41, stdev=341.85 00:31:35.308 lat (usec): min=1612, max=8744, avg=5066.24, stdev=341.78 00:31:35.308 clat percentiles (usec): 00:31:35.308 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:31:35.308 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5145], 00:31:35.308 | 70.00th=[ 5211], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:31:35.308 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6587], 99.95th=[ 7898], 00:31:35.308 | 99.99th=[ 8455] 00:31:35.308 bw ( KiB/s): min=54320, max=56168, per=100.00%, avg=55688.00, stdev=912.30, samples=4 00:31:35.308 iops : min=13580, max=14042, avg=13922.00, stdev=228.08, samples=4 00:31:35.308 write: IOPS=13.9k, BW=54.4MiB/s (57.1MB/s)(109MiB/2005msec); 0 zone resets 00:31:35.308 slat (nsec): min=1455, max=95205, avg=1880.02, stdev=705.57 00:31:35.308 clat (usec): min=999, max=7944, avg=4068.02, stdev=292.52 00:31:35.308 lat (usec): min=1006, max=7945, avg=4069.90, stdev=292.50 00:31:35.308 clat percentiles (usec): 00:31:35.308 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 3752], 20.00th=[ 3851], 00:31:35.308 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:35.308 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4490], 00:31:35.308 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 5800], 99.95th=[ 6521], 00:31:35.308 | 99.99th=[ 7898] 00:31:35.308 bw ( KiB/s): min=54728, max=56104, per=99.98%, avg=55724.00, stdev=667.46, samples=4 00:31:35.308 iops : min=13682, max=14026, avg=13931.00, stdev=166.87, samples=4 00:31:35.308 lat (usec) : 1000=0.01% 00:31:35.308 lat (msec) : 2=0.04%, 4=19.86%, 10=80.10% 00:31:35.308 cpu : usr=74.55%, sys=24.35%, ctx=36, majf=0, minf=27 00:31:35.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:35.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:35.308 issued rwts: total=27897,27936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:35.308 00:31:35.308 Run status group 0 (all jobs): 00:31:35.308 READ: bw=54.3MiB/s (57.0MB/s), 54.3MiB/s-54.3MiB/s (57.0MB/s-57.0MB/s), io=109MiB (114MB), run=2005-2005msec 00:31:35.308 WRITE: bw=54.4MiB/s (57.1MB/s), 54.4MiB/s-54.4MiB/s (57.1MB/s-57.1MB/s), io=109MiB (114MB), run=2005-2005msec 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:35.308 16:58:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:35.568 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:35.568 fio-3.35 00:31:35.568 Starting 1 thread 00:31:38.103 00:31:38.103 test: (groupid=0, jobs=1): err= 0: pid=2429569: Fri Dec 6 16:58:26 2024 00:31:38.103 read: IOPS=12.2k, BW=190MiB/s (199MB/s)(381MiB/2004msec) 00:31:38.103 slat (nsec): min=2346, max=83745, avg=2613.73, stdev=1098.06 00:31:38.103 clat (usec): min=2398, max=13822, avg=6334.19, stdev=1714.87 00:31:38.103 lat (usec): min=2401, max=13824, avg=6336.80, stdev=1715.04 00:31:38.103 clat percentiles (usec): 00:31:38.103 | 1.00th=[ 3261], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 4752], 00:31:38.103 | 30.00th=[ 5211], 40.00th=[ 5669], 50.00th=[ 6128], 60.00th=[ 6652], 00:31:38.103 | 70.00th=[ 7308], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9241], 00:31:38.103 | 99.00th=[10683], 99.50th=[11207], 99.90th=[12518], 99.95th=[12649], 00:31:38.103 | 99.99th=[12911] 00:31:38.103 bw ( KiB/s): min=92896, max=97280, per=48.96%, avg=95304.00, stdev=2100.72, samples=4 00:31:38.103 iops : min= 5806, max= 6080, avg=5956.50, stdev=131.29, samples=4 00:31:38.103 write: IOPS=7209, BW=113MiB/s 
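For the local-disk leg ('[' 1 -eq 1 ']' at host/fio.sh@49 above), the harness enumerates physical NVMe controllers and attaches the first as a PCIe bdev: gen_nvme.sh emits the JSON config and jq pulls out the transport addresses, a single 0000:65:00.0 on this rig. Condensed (the run above passes a couple of extra flags this sketch omits):

    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}"
    # -> registers bdev Nvme0n1, the base device for the lvstore created next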
(118MB/s)(195MiB/1730msec); 0 zone resets 00:31:38.103 slat (usec): min=27, max=152, avg=29.62, stdev= 4.97 00:31:38.103 clat (usec): min=2806, max=12670, avg=7279.73, stdev=1300.79 00:31:38.103 lat (usec): min=2834, max=12710, avg=7309.34, stdev=1302.89 00:31:38.103 clat percentiles (usec): 00:31:38.103 | 1.00th=[ 4883], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 6194], 00:31:38.103 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7111], 60.00th=[ 7439], 00:31:38.103 | 70.00th=[ 7832], 80.00th=[ 8356], 90.00th=[ 9110], 95.00th=[ 9634], 00:31:38.103 | 99.00th=[10814], 99.50th=[11076], 99.90th=[11994], 99.95th=[12256], 00:31:38.103 | 99.99th=[12649] 00:31:38.103 bw ( KiB/s): min=96544, max=101376, per=86.04%, avg=99256.00, stdev=2246.45, samples=4 00:31:38.103 iops : min= 6034, max= 6336, avg=6203.50, stdev=140.40, samples=4 00:31:38.103 lat (msec) : 4=4.68%, 10=92.84%, 20=2.48% 00:31:38.103 cpu : usr=83.92%, sys=14.33%, ctx=27, majf=0, minf=49 00:31:38.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:31:38.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:38.103 issued rwts: total=24382,12473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.103 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:38.103 00:31:38.103 Run status group 0 (all jobs): 00:31:38.103 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=381MiB (399MB), run=2004-2004msec 00:31:38.103 WRITE: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=195MiB (204MB), run=1730-1730msec 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:31:38.103 16:58:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:31:38.672 Nvme0n1 00:31:38.672 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # 
ls_guid=6f9b5e1e-31fc-4d7a-b0dd-989035ade1cd 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 6f9b5e1e-31fc-4d7a-b0dd-989035ade1cd 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=6f9b5e1e-31fc-4d7a-b0dd-989035ade1cd 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:39.240 { 00:31:39.240 "uuid": "6f9b5e1e-31fc-4d7a-b0dd-989035ade1cd", 00:31:39.240 "name": "lvs_0", 00:31:39.240 "base_bdev": "Nvme0n1", 00:31:39.240 "total_data_clusters": 1787, 00:31:39.240 "free_clusters": 1787, 00:31:39.240 "block_size": 512, 00:31:39.240 "cluster_size": 1073741824 00:31:39.240 } 00:31:39.240 ]' 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="6f9b5e1e-31fc-4d7a-b0dd-989035ade1cd") .free_clusters' 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1787 00:31:39.240 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="6f9b5e1e-31fc-4d7a-b0dd-989035ade1cd") .cluster_size' 00:31:39.499 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:39.499 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1829888 00:31:39.499 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1829888 00:31:39.499 1829888 00:31:39.499 16:58:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:31:39.499 2c71fc1a-dac6-44a1-8e6f-ac787587e396 00:31:39.499 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:39.759 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:39.759 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.019 16:58:28 
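get_lvs_free_mb above converts the lvstore's cluster accounting into MiB: with 1787 free clusters of 1 GiB (1073741824 bytes) each, the usable size is 1787 x 1024 = 1829888 MiB, which is exactly the figure handed to bdev_lvol_create for lvs_0/lbd_0. The arithmetic, spelled out:

    fc=1787 cs=1073741824
    echo $(( fc * (cs / 1048576) ))   # cluster count x MiB-per-cluster -> 1829888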
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:40.019 16:58:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:40.278 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:40.278 fio-3.35 00:31:40.278 Starting 1 thread 00:31:42.815 00:31:42.815 test: (groupid=0, jobs=1): err= 0: pid=2430829: Fri Dec 6 16:58:31 2024 00:31:42.815 read: IOPS=10.3k, BW=40.3MiB/s (42.2MB/s)(80.7MiB/2005msec) 00:31:42.815 slat (nsec): min=1402, max=88133, avg=1569.31, stdev=799.91 00:31:42.815 clat (usec): min=1835, max=11444, avg=6871.91, stdev=517.26 00:31:42.815 lat (usec): min=1847, max=11446, avg=6873.48, stdev=517.21 00:31:42.815 clat percentiles (usec): 00:31:42.815 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6259], 20.00th=[ 6456], 00:31:42.815 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6980], 00:31:42.815 | 70.00th=[ 7111], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7701], 00:31:42.815 | 99.00th=[ 8029], 99.50th=[ 8160], 99.90th=[ 
9765], 99.95th=[10552], 00:31:42.815 | 99.99th=[11338] 00:31:42.815 bw ( KiB/s): min=39880, max=42048, per=99.95%, avg=41200.00, stdev=931.60, samples=4 00:31:42.815 iops : min= 9970, max=10512, avg=10300.00, stdev=232.90, samples=4 00:31:42.815 write: IOPS=10.3k, BW=40.3MiB/s (42.3MB/s)(80.8MiB/2005msec); 0 zone resets 00:31:42.815 slat (nsec): min=1435, max=72314, avg=1625.02, stdev=704.82 00:31:42.815 clat (usec): min=794, max=9934, avg=5447.34, stdev=459.94 00:31:42.815 lat (usec): min=800, max=9935, avg=5448.96, stdev=459.96 00:31:42.815 clat percentiles (usec): 00:31:42.815 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5080], 00:31:42.815 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:31:42.815 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5997], 95.00th=[ 6128], 00:31:42.815 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 9110], 99.95th=[ 9634], 00:31:42.815 | 99.99th=[ 9896] 00:31:42.815 bw ( KiB/s): min=40528, max=41680, per=99.94%, avg=41258.00, stdev=531.76, samples=4 00:31:42.815 iops : min=10132, max=10420, avg=10314.50, stdev=132.94, samples=4 00:31:42.815 lat (usec) : 1000=0.01% 00:31:42.815 lat (msec) : 2=0.03%, 4=0.09%, 10=99.84%, 20=0.03% 00:31:42.815 cpu : usr=70.01%, sys=29.34%, ctx=39, majf=0, minf=27 00:31:42.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:42.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:42.815 issued rwts: total=20662,20694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:42.815 00:31:42.815 Run status group 0 (all jobs): 00:31:42.815 READ: bw=40.3MiB/s (42.2MB/s), 40.3MiB/s-40.3MiB/s (42.2MB/s-42.2MB/s), io=80.7MiB (84.6MB), run=2005-2005msec 00:31:42.815 WRITE: bw=40.3MiB/s (42.3MB/s), 40.3MiB/s-40.3MiB/s (42.3MB/s-42.3MB/s), io=80.8MiB (84.8MB), run=2005-2005msec 00:31:42.816 16:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:42.816 16:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=f08ffa04-47f5-4241-b688-008499548f43 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb f08ffa04-47f5-4241-b688-008499548f43 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=f08ffa04-47f5-4241-b688-008499548f43 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:43.753 { 00:31:43.753 "uuid": "6f9b5e1e-31fc-4d7a-b0dd-989035ade1cd", 00:31:43.753 "name": "lvs_0", 00:31:43.753 "base_bdev": "Nvme0n1", 
00:31:43.753 "total_data_clusters": 1787, 00:31:43.753 "free_clusters": 0, 00:31:43.753 "block_size": 512, 00:31:43.753 "cluster_size": 1073741824 00:31:43.753 }, 00:31:43.753 { 00:31:43.753 "uuid": "f08ffa04-47f5-4241-b688-008499548f43", 00:31:43.753 "name": "lvs_n_0", 00:31:43.753 "base_bdev": "2c71fc1a-dac6-44a1-8e6f-ac787587e396", 00:31:43.753 "total_data_clusters": 457025, 00:31:43.753 "free_clusters": 457025, 00:31:43.753 "block_size": 512, 00:31:43.753 "cluster_size": 4194304 00:31:43.753 } 00:31:43.753 ]' 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f08ffa04-47f5-4241-b688-008499548f43") .free_clusters' 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=457025 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f08ffa04-47f5-4241-b688-008499548f43") .cluster_size' 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1828100 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1828100 00:31:43.753 1828100 00:31:43.753 16:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:31:44.692 ce59c98a-e084-4111-9218-d35e883c54cf 00:31:44.692 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:44.692 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:44.952 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.212 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:45.212 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:45.212 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:45.212 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:45.212 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:45.212 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:45.212 16:58:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:45.472 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:45.472 fio-3.35 00:31:45.472 Starting 1 thread 00:31:48.009 00:31:48.009 test: (groupid=0, jobs=1): err= 0: pid=2432232: Fri Dec 6 16:58:36 2024 00:31:48.009 read: IOPS=8701, BW=34.0MiB/s (35.6MB/s)(68.2MiB/2006msec) 00:31:48.009 slat (nsec): min=1410, max=119752, avg=1727.80, stdev=1295.39 00:31:48.009 clat (usec): min=2816, max=13421, avg=8125.86, stdev=645.14 00:31:48.009 lat (usec): min=2833, max=13423, avg=8127.58, stdev=645.08 00:31:48.009 clat percentiles (usec): 00:31:48.009 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:31:48.009 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:31:48.009 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9110], 00:31:48.009 | 99.00th=[ 9503], 99.50th=[ 9765], 99.90th=[11076], 99.95th=[11994], 00:31:48.009 | 99.99th=[13435] 00:31:48.009 bw ( KiB/s): min=33216, max=35664, per=99.90%, avg=34774.00, stdev=1079.56, samples=4 00:31:48.009 iops : min= 8304, max= 8916, avg=8693.50, stdev=269.89, samples=4 00:31:48.009 write: IOPS=8695, BW=34.0MiB/s (35.6MB/s)(68.1MiB/2006msec); 0 zone resets 00:31:48.009 slat (nsec): min=1432, max=103363, avg=1786.71, stdev=873.27 00:31:48.009 clat (usec): min=1337, max=12263, avg=6463.48, stdev=561.79 00:31:48.009 lat (usec): min=1345, max=12265, avg=6465.27, stdev=561.76 00:31:48.009 clat percentiles (usec): 00:31:48.009 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 5800], 20.00th=[ 6063], 00:31:48.009 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 
6587], 00:31:48.009 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:31:48.009 | 99.00th=[ 7701], 99.50th=[ 7832], 99.90th=[ 9503], 99.95th=[10290], 00:31:48.009 | 99.99th=[12256] 00:31:48.009 bw ( KiB/s): min=34256, max=35136, per=99.97%, avg=34772.00, stdev=378.04, samples=4 00:31:48.009 iops : min= 8564, max= 8784, avg=8693.00, stdev=94.51, samples=4 00:31:48.009 lat (msec) : 2=0.01%, 4=0.10%, 10=99.72%, 20=0.17% 00:31:48.009 cpu : usr=66.53%, sys=32.77%, ctx=49, majf=0, minf=27 00:31:48.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:48.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:48.009 issued rwts: total=17456,17444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.009 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:48.009 00:31:48.009 Run status group 0 (all jobs): 00:31:48.009 READ: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.2MiB (71.5MB), run=2006-2006msec 00:31:48.009 WRITE: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.5MB), run=2006-2006msec 00:31:48.009 16:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:31:48.009 16:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:31:48.009 16:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:31:49.982 16:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:49.982 16:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:31:50.552 16:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:50.552 16:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:52.457 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:52.458 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.458 rmmod nvme_tcp 00:31:52.458 rmmod nvme_fabrics 00:31:52.717 rmmod nvme_keyring 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 
-- # set -e 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2428195 ']' 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2428195 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2428195 ']' 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2428195 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428195 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428195' 00:31:52.717 killing process with pid 2428195 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2428195 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2428195 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.717 16:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.255 00:31:55.255 real 0m29.507s 00:31:55.255 user 2m24.585s 00:31:55.255 sys 0m7.957s 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.255 ************************************ 00:31:55.255 END TEST nvmf_fio_host 00:31:55.255 ************************************ 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.255 ************************************ 00:31:55.255 START TEST nvmf_failover 00:31:55.255 ************************************ 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:31:55.255 * Looking for test storage... 00:31:55.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.255 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.256 --rc genhtml_branch_coverage=1 00:31:55.256 --rc genhtml_function_coverage=1 00:31:55.256 --rc genhtml_legend=1 00:31:55.256 --rc geninfo_all_blocks=1 00:31:55.256 --rc geninfo_unexecuted_blocks=1 00:31:55.256 00:31:55.256 ' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.256 --rc genhtml_branch_coverage=1 00:31:55.256 --rc genhtml_function_coverage=1 00:31:55.256 --rc genhtml_legend=1 00:31:55.256 --rc geninfo_all_blocks=1 00:31:55.256 --rc geninfo_unexecuted_blocks=1 00:31:55.256 00:31:55.256 ' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.256 --rc genhtml_branch_coverage=1 00:31:55.256 --rc genhtml_function_coverage=1 00:31:55.256 --rc genhtml_legend=1 00:31:55.256 --rc geninfo_all_blocks=1 00:31:55.256 --rc geninfo_unexecuted_blocks=1 00:31:55.256 00:31:55.256 ' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:55.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.256 --rc genhtml_branch_coverage=1 00:31:55.256 --rc genhtml_function_coverage=1 00:31:55.256 --rc genhtml_legend=1 00:31:55.256 --rc geninfo_all_blocks=1 00:31:55.256 --rc geninfo_unexecuted_blocks=1 00:31:55.256 00:31:55.256 ' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:55.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
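From here on the failover test juggles two JSON-RPC endpoints: target-side RPCs go to nvmf_tgt's default socket (/var/tmp/spdk.sock, as the waitforlisten message below confirms), while initiator-side RPCs are directed at the bdevperf application's own socket with -s. A condensed sketch of the split, using only commands that appear later in this trace:

  # target side: rpc.py talks to nvmf_tgt's default /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # initiator side: the same tooling is pointed at bdevperf's socket instead
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests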
00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.256 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.257 16:58:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:00.527 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:00.527 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:00.527 Found net devices under 0000:31:00.0: cvl_0_0 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:00.527 Found net devices under 0000:31:00.1: cvl_0_1 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:00.527 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:00.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:32:00.528 00:32:00.528 --- 10.0.0.2 ping statistics --- 00:32:00.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.528 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:32:00.528 00:32:00.528 --- 10.0.0.1 ping statistics --- 00:32:00.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.528 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2437917 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2437917 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2437917 ']' 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.528 16:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:00.528 [2024-12-06 16:58:48.854508] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:32:00.528 [2024-12-06 16:58:48.854557] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.528 [2024-12-06 16:58:48.919492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:00.528 [2024-12-06 16:58:48.936253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:00.528 [2024-12-06 16:58:48.936287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.528 [2024-12-06 16:58:48.936294] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.528 [2024-12-06 16:58:48.936299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.528 [2024-12-06 16:58:48.936305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.528 [2024-12-06 16:58:48.937476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.528 [2024-12-06 16:58:48.937633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.528 [2024-12-06 16:58:48.937636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:00.528 [2024-12-06 16:58:49.173096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.528 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:00.788 Malloc0 00:32:00.788 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:01.047 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:01.047 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:01.306 [2024-12-06 16:58:49.832735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.306 16:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:01.306 [2024-12-06 16:58:49.993201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:01.566 [2024-12-06 16:58:50.153698] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2438271 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2438271 /var/tmp/bdevperf.sock 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2438271 ']' 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:01.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:01.566 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:01.825 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:01.825 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:01.825 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:02.085 NVMe0n1 00:32:02.085 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:02.345 00:32:02.345 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2438512 00:32:02.345 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:02.345 16:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:03.284 16:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.545 [2024-12-06 16:58:52.085459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cccb90 is same with the state(6) to be set 00:32:03.545 [2024-12-06 16:58:52.085499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cccb90 is same with the state(6) to be set 00:32:03.545 [2024-12-06 16:58:52.085505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cccb90 is same with the state(6) to be set 00:32:03.545 
[log elided: the tcp.c:1790 message above repeats ~125 more times for tqpair=0x1cccb90 (timestamps 16:58:52.085499 through 16:58:52.086051) while the 4420 listener is torn down; duplicate entries collapsed]
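[editor's note: the bring-up logged above condenses to the following sketch; the commands themselves are taken verbatim from the log, while $SPDK is shorthand introduced here for the jenkins workspace checkout, and an nvmf target app is assumed to be already serving the default RPC socket]
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Target side (default RPC socket): TCP transport, one malloc-backed
# namespace under cnode1, and three portals on 10.0.0.2.
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# Host side: bdevperf waits for RPCs (-z) on its own socket; attaching the
# same bdev name a second time with -x failover registers 4421 as an
# alternate path rather than a new controller.
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &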
00:32:03.547 16:58:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:06.840 16:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:32:06.840
00:32:06.840 16:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:06.840 [2024-12-06 16:58:55.495083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccdab0 is same with the state(6) to be set
[log elided: the same message repeats ~23 more times for tqpair=0x1ccdab0 (through 16:58:55.495228) as the 4421 listener is torn down; duplicate entries collapsed]
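[editor's note: the portal rotation the script drives here condenses to the sketch below; the commands are the ones logged above and just after this note, while $SPDK, $NQN, and the plain sleep calls are shorthand for the test's own helpers]
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1
# Drop the portal bdevperf attached first; I/O queued on it is aborted.
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 3
# Register a third path with the host, then drop the second portal.
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
sleep 3
# Restore the original portal, then retire the last alternate (see the
# log entries that follow).
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 1
$SPDK/scripts/rpc.py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422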
00:32:06.840 16:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:10.128 16:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:10.128 [2024-12-06 16:58:58.659633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:10.128 16:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:11.064 16:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:32:11.324 [2024-12-06 16:58:59.826997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ccf050 is same with the state(6) to be set
[log elided: the same message repeats ~105 more times for tqpair=0x1ccf050 (through 16:58:59.827493) as the 4422 listener is torn down; duplicate entries collapsed]
00:32:11.325 16:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2438512
00:32:17.898 {
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 16384
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 15.007999,
      "iops": 12533.383031275522,
      "mibps": 48.95852746592001,
      "io_failed": 11485,
      "io_timeout": 0,
      "avg_latency_us": 9604.070694070058,
      "min_latency_us": 546.1333333333333,
      "max_latency_us": 15182.506666666666
    }
  ],
  "core_count": 1
}
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2438271
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2438271 ']'
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2438271
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2438271
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2438271'
killing process with pid 2438271
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2438271
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2438271
00:32:17.899 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-06 16:58:50.204229] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
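[editor's note: a quick arithmetic check of the perform_tests summary above, inserted before the bdevperf log (try.txt) continues below; the numbers are the reported ones, the awk script itself is illustrative]
awk 'BEGIN {
  iops = 12533.383031275522; io_size = 4096; runtime = 15.007999
  # mibps should be iops * io_size / 2^20; since 4096/1048576 == 1/256 this is exact
  printf "MiB/s = %.11f (reported 48.95852746592001)\n", iops * io_size / 1048576
  # iops * runtime recovers the number of completed verify reads
  printf "completed I/O ~= %.0f over %.6f s; io_failed = 11485 were aborted during failover\n", iops * runtime, runtime
}'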
00:32:17.899 [2024-12-06 16:58:50.204288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2438271 ]
00:32:17.899 [2024-12-06 16:58:50.281244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:17.899 [2024-12-06 16:58:50.299123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:17.899 Running I/O for 15 seconds...
00:32:17.899 11567.00 IOPS, 45.18 MiB/s [2024-12-06T15:59:06.592Z]
00:32:17.899 [2024-12-06 16:58:52.086909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.899 [2024-12-06 16:58:52.086943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log elided: matching READ command prints and ABORTED - SQ DELETION (00/08) completions repeat in pairs (~60 pairs, lba 99128 through 99584 in this excerpt) for the reads that were queued on the path whose listener was just removed; these aborted reads are what the io_failed count of 11485 in the summary above reflects, while the verify job itself ran to completion]
00:32:17.899 [2024-12-06 16:58:52.087935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.087945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:99592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.087952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.087962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.087969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.087978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.087987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.087996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.088004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.088013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:99624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.088021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.088030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.088038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.088048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.088055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.088065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.900 [2024-12-06 16:58:52.088072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.900 [2024-12-06 16:58:52.088082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 
[2024-12-06 16:58:52.088118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.901 [2024-12-06 16:58:52.088528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:99896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:99904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:99912 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:99928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:99944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:99960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.901 [2024-12-06 16:58:52.088738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:99968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.901 [2024-12-06 16:58:52.088745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:99984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 
[2024-12-06 16:58:52.088794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:100024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:100032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:100056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:100064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.088985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.088992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.089008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.089025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.089043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.902 [2024-12-06 16:58:52.089059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.902 [2024-12-06 16:58:52.089090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100128 len:8 PRP1 0x0 PRP2 0x0 00:32:17.902 [2024-12-06 16:58:52.089098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.902 [2024-12-06 16:58:52.089121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.902 [2024-12-06 16:58:52.089128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100136 len:8 PRP1 0x0 PRP2 0x0 00:32:17.902 [2024-12-06 16:58:52.089136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089175] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:17.902 [2024-12-06 16:58:52.089196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.902 [2024-12-06 16:58:52.089205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.902 [2024-12-06 16:58:52.089220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.902 [2024-12-06 16:58:52.089235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.902 [2024-12-06 16:58:52.089251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.902 [2024-12-06 16:58:52.089258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:17.902 [2024-12-06 16:58:52.092832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:17.902 [2024-12-06 16:58:52.092857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23efe30 (9): Bad file descriptor 00:32:17.902 [2024-12-06 16:58:52.254134] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
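The sequence above is the interesting part of this stretch of the log: the TCP qpair to 10.0.0.2:4420 goes down, every queued I/O is completed with ABORTED - SQ DELETION, and bdev_nvme fails the controller over to 10.0.0.2:4421, then resets it successfully. When digging through logs like this, a small filter script can reduce the abort spam to the events that matter. Below is a minimal sketch (not part of the test suite); the regular expressions are assumptions derived from the notice formats shown above.

#!/usr/bin/env python3
# Sketch: condense an SPDK autotest log (fed on stdin) to its failover story.
# The patterns below are assumptions based on the notices printed above.
import re
import sys

FAILOVER = re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: \[([^,]+), \d+\] "
                      r"Start failover from (\S+) to (\S+)")
RESET_OK = re.compile(r"bdev_nvme_reset_ctrlr_complete: \*NOTICE\*: "
                      r"\[([^,]+), \d+\] Resetting controller successful")
ABORTED = re.compile(r"ABORTED - SQ DELETION")

aborted = 0
for line in sys.stdin:
    # Each aborted command produces one completion notice; just count them.
    aborted += len(ABORTED.findall(line))
    m = FAILOVER.search(line)
    if m:
        nqn, src, dst = m.groups()
        print(f"{nqn}: failover {src} -> {dst} ({aborted} aborts so far)")
    m = RESET_OK.search(line)
    if m:
        print(f"{m.group(1)}: controller reset completed")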
00:32:17.902 10655.50 IOPS, 41.62 MiB/s [2024-12-06T15:59:06.595Z] 11207.67 IOPS, 43.78 MiB/s [2024-12-06T15:59:06.595Z] 11643.25 IOPS, 45.48 MiB/s [2024-12-06T15:59:06.595Z]
[2024-12-06 16:58:55.495924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:17.902 [2024-12-06 16:58:55.495953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated notice pairs elided: queued READ commands (sqid:1, lba 76648-76656) and WRITE commands (sqid:1, lba 76784-77392, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:32:17.904 [2024-12-06 16:58:55.496896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:17.904 [2024-12-06 16:58:55.496902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0
00:32:17.904 [2024-12-06 16:58:55.496908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same aborting-queued-i/o / manual-completion sequence repeats for queued WRITE commands lba 77408-77496 (cid:0, PRP1 0x0 PRP2 0x0) ...]
00:32:17.905 [2024-12-06 16:58:55.497142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:17.905 [2024-12-06 16:58:55.497145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*:
Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77504 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77512 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77520 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77528 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77536 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 
16:58:55.497261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77560 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497293] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77568 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77576 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77584 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.905 [2024-12-06 16:58:55.497369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.905 [2024-12-06 16:58:55.497373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77600 len:8 PRP1 0x0 PRP2 0x0 00:32:17.905 [2024-12-06 16:58:55.497378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.905 [2024-12-06 16:58:55.497383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77608 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77616 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77624 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77632 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77640 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:77648 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497495] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77656 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77664 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76664 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76672 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76680 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76688 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 
[2024-12-06 16:58:55.497602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76696 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76704 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76712 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76720 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76728 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76736 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76744 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76752 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76760 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76768 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:17.906 [2024-12-06 16:58:55.497795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:17.906 [2024-12-06 16:58:55.497799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76776 len:8 PRP1 0x0 PRP2 0x0 00:32:17.906 [2024-12-06 16:58:55.497804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.906 [2024-12-06 16:58:55.497835] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:17.906 [2024-12-06 16:58:55.497852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.907 [2024-12-06 16:58:55.497858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:17.907 [2024-12-06 16:58:55.497864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.907 [2024-12-06 16:58:55.497869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:55.497875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.907 [2024-12-06 16:58:55.497880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:55.497889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:17.907 [2024-12-06 16:58:55.497896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:55.497904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:17.907 [2024-12-06 16:58:55.497930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23efe30 (9): Bad file descriptor 00:32:17.907 [2024-12-06 16:58:55.500345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:17.907 [2024-12-06 16:58:55.529195] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:32:17.907 11824.80 IOPS, 46.19 MiB/s [2024-12-06T15:59:06.600Z] 12028.67 IOPS, 46.99 MiB/s [2024-12-06T15:59:06.600Z] 12153.29 IOPS, 47.47 MiB/s [2024-12-06T15:59:06.600Z] 12249.25 IOPS, 47.85 MiB/s [2024-12-06T15:59:06.600Z] [2024-12-06 16:58:59.829530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.907 [2024-12-06 16:58:59.829561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.907 [2024-12-06 16:58:59.829581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.907 [2024-12-06 16:58:59.829593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.907 [2024-12-06 16:58:59.829608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.907 [2024-12-06 16:58:59.829621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.907 [2024-12-06 16:58:59.829634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829872] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.907 [2024-12-06 16:58:59.829954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.907 [2024-12-06 16:58:59.829960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.829966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.829972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.829978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.829984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.829990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.829995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:17.908 [2024-12-06 16:58:59.830018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13240 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 
[2024-12-06 16:58:59.830232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:17.908 [2024-12-06 16:58:59.830345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:17.908 [2024-12-06 16:58:59.830351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.908 [2024-12-06 16:58:59.830358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:17.908 [2024-12-06 16:58:59.830363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated "aborting queued i/o" / "Command completed manually" notices and ABORTED - SQ DELETION completions for queued WRITE commands, lba:13416 through lba:13896, omitted ...]
00:32:17.911 [2024-12-06 16:58:59.835566] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:32:17.911 [2024-12-06 16:58:59.835604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:17.911 [2024-12-06 16:58:59.835620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.911 [2024-12-06 16:58:59.835638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:17.911 [2024-12-06 16:58:59.835649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.911 [2024-12-06 16:58:59.835659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:17.911 [2024-12-06 16:58:59.835668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.911 [2024-12-06 16:58:59.835678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:17.911 [2024-12-06 16:58:59.835687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:17.911 [2024-12-06 16:58:59.835697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:32:17.911 [2024-12-06 16:58:59.835746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23efe30 (9): Bad file descriptor
00:32:17.911 [2024-12-06 16:58:59.838401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:32:17.911 [2024-12-06 16:58:59.901970] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:32:17.911 12232.89 IOPS, 47.78 MiB/s [2024-12-06T15:59:06.604Z]
12298.70 IOPS, 48.04 MiB/s [2024-12-06T15:59:06.604Z]
12356.00 IOPS, 48.27 MiB/s [2024-12-06T15:59:06.604Z]
12406.67 IOPS, 48.46 MiB/s [2024-12-06T15:59:06.604Z]
12461.08 IOPS, 48.68 MiB/s [2024-12-06T15:59:06.604Z]
12505.57 IOPS, 48.85 MiB/s [2024-12-06T15:59:06.604Z]
12533.00 IOPS, 48.96 MiB/s
00:32:17.911 Latency(us)
00:32:17.911 [2024-12-06T15:59:06.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:17.911 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:17.911 Verification LBA range: start 0x0 length 0x4000
00:32:17.911 NVMe0n1 : 15.01 12533.38 48.96 765.26 0.00 9604.07 546.13 15182.51
00:32:17.911 [2024-12-06T15:59:06.604Z] ===================================================================================================================
00:32:17.911 [2024-12-06T15:59:06.604Z] Total : 12533.38 48.96 765.26 0.00 9604.07 546.13 15182.51
00:32:17.911 Received shutdown signal, test time was about 15.000000 seconds
00:32:17.911
00:32:17.911 Latency(us)
00:32:17.911 [2024-12-06T15:59:06.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:17.911 [2024-12-06T15:59:06.604Z] ===================================================================================================================
00:32:17.911 [2024-12-06T15:59:06.604Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2441735
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2441735 /var/tmp/bdevperf.sock
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2441735 ']'
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
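For the single-failover phase that follows, the harness restarts bdevperf in RPC mode (-z), adds two more listeners on the target, and attaches the same subsystem over all three ports; "-x failover" is what registers the extra trids as standby failover paths for the NVMe0 bdev instead of independent controllers. A condensed, hedged sketch of the RPC sequence driven in the trace below (the loop is a restructuring for illustration; the rpc.py path, socket, address, ports, and NQN are copied from this run):

# Hedged sketch of the listener/attach/detach sequence in the trace below
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# target side: expose two additional listeners
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

# initiator side: attach the same subsystem over all three ports
for port in 4420 4421 4422; do
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s "$port" -f ipv4 -n "$nqn" -x failover
done

# drop the active 4420 path; queued I/O should fail over to 4421
"$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n "$nqn"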
00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:17.911 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:17.911 [2024-12-06 16:59:06.537580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:17.912 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:18.170 [2024-12-06 16:59:06.697924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:18.170 16:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.429 NVMe0n1 00:32:18.429 16:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.688 00:32:18.688 16:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:18.947 00:32:18.947 16:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:18.947 16:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:19.204 16:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:19.462 16:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:22.750 16:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:22.750 16:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:22.750 16:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2442882 00:32:22.750 16:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:22.750 16:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2442882 00:32:23.687 { 00:32:23.687 "results": [ 00:32:23.687 { 00:32:23.687 "job": "NVMe0n1", 00:32:23.687 "core_mask": "0x1", 
00:32:23.687 "workload": "verify", 00:32:23.687 "status": "finished", 00:32:23.687 "verify_range": { 00:32:23.687 "start": 0, 00:32:23.687 "length": 16384 00:32:23.687 }, 00:32:23.687 "queue_depth": 128, 00:32:23.687 "io_size": 4096, 00:32:23.687 "runtime": 1.006001, 00:32:23.687 "iops": 12933.386746136435, 00:32:23.687 "mibps": 50.52104197709545, 00:32:23.687 "io_failed": 0, 00:32:23.687 "io_timeout": 0, 00:32:23.687 "avg_latency_us": 9863.252054415494, 00:32:23.687 "min_latency_us": 1911.4666666666667, 00:32:23.687 "max_latency_us": 12506.453333333333 00:32:23.687 } 00:32:23.687 ], 00:32:23.687 "core_count": 1 00:32:23.687 } 00:32:23.687 16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:23.687 [2024-12-06 16:59:06.252056] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:32:23.687 [2024-12-06 16:59:06.252120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2441735 ] 00:32:23.687 [2024-12-06 16:59:06.316684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.687 [2024-12-06 16:59:06.331552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.687 [2024-12-06 16:59:07.884463] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:23.687 [2024-12-06 16:59:07.884498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.687 [2024-12-06 16:59:07.884507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.687 [2024-12-06 16:59:07.884514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.687 [2024-12-06 16:59:07.884519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.687 [2024-12-06 16:59:07.884525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.687 [2024-12-06 16:59:07.884530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.687 [2024-12-06 16:59:07.884536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:23.687 [2024-12-06 16:59:07.884541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:23.687 [2024-12-06 16:59:07.884547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:32:23.687 [2024-12-06 16:59:07.884567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:32:23.687 [2024-12-06 16:59:07.884578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3ae30 (9): Bad file descriptor
00:32:23.687 [2024-12-06 16:59:07.937410] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:32:23.687 Running I/O for 1 seconds...
00:32:23.687 12883.00 IOPS, 50.32 MiB/s
00:32:23.687 Latency(us)
00:32:23.687 [2024-12-06T15:59:12.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:23.687 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:23.687 Verification LBA range: start 0x0 length 0x4000
00:32:23.687 NVMe0n1 : 1.01 12933.39 50.52 0.00 0.00 9863.25 1911.47 12506.45
00:32:23.687 [2024-12-06T15:59:12.380Z] ===================================================================================================================
00:32:23.687 [2024-12-06T15:59:12.380Z] Total : 12933.39 50.52 0.00 0.00 9863.25 1911.47 12506.45
00:32:23.687 16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:32:23.947 16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:23.947 16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:32:24.206 16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:32:24.206 16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:32:24.206 16:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:32:27.518 16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2441735
16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2441735 ']'
16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2441735
16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
16:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2441735
00:32:27.518 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
16:59:16 nvmf_tcp.nvmf_host.nvmf_failover --
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.518 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2441735' 00:32:27.518 killing process with pid 2441735 00:32:27.518 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2441735 00:32:27.518 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2441735 00:32:27.518 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:27.518 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:27.778 rmmod nvme_tcp 00:32:27.778 rmmod nvme_fabrics 00:32:27.778 rmmod nvme_keyring 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2437917 ']' 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2437917 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2437917 ']' 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2437917 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437917 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437917' 00:32:27.778 killing process with pid 2437917 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2437917 00:32:27.778 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2437917 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.038 16:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:29.944 00:32:29.944 real 0m35.120s 00:32:29.944 user 1m52.105s 00:32:29.944 sys 0m6.611s 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.944 ************************************ 00:32:29.944 END TEST nvmf_failover 00:32:29.944 ************************************ 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.944 ************************************ 00:32:29.944 START TEST nvmf_host_discovery 00:32:29.944 ************************************ 00:32:29.944 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:30.204 * Looking for test storage... 
00:32:30.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.204 --rc genhtml_branch_coverage=1 00:32:30.204 --rc genhtml_function_coverage=1 00:32:30.204 --rc genhtml_legend=1 00:32:30.204 --rc geninfo_all_blocks=1 00:32:30.204 --rc geninfo_unexecuted_blocks=1 00:32:30.204 00:32:30.204 ' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.204 --rc genhtml_branch_coverage=1 00:32:30.204 --rc genhtml_function_coverage=1 00:32:30.204 --rc genhtml_legend=1 00:32:30.204 --rc geninfo_all_blocks=1 00:32:30.204 --rc geninfo_unexecuted_blocks=1 00:32:30.204 00:32:30.204 ' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.204 --rc genhtml_branch_coverage=1 00:32:30.204 --rc genhtml_function_coverage=1 00:32:30.204 --rc genhtml_legend=1 00:32:30.204 --rc geninfo_all_blocks=1 00:32:30.204 --rc geninfo_unexecuted_blocks=1 00:32:30.204 00:32:30.204 ' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:30.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.204 --rc genhtml_branch_coverage=1 00:32:30.204 --rc genhtml_function_coverage=1 00:32:30.204 --rc genhtml_legend=1 00:32:30.204 --rc geninfo_all_blocks=1 00:32:30.204 --rc geninfo_unexecuted_blocks=1 00:32:30.204 00:32:30.204 ' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:30.204 16:59:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.204 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.205 16:59:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:35.488 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:35.488 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.488 16:59:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:35.488 Found net devices under 0000:31:00.0: cvl_0_0 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:35.488 Found net devices under 0000:31:00.1: cvl_0_1 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:35.488 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.488 
16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.489 16:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:35.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:32:35.489 00:32:35.489 --- 10.0.0.2 ping statistics --- 00:32:35.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.489 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:35.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:32:35.489 00:32:35.489 --- 10.0.0.1 ping statistics --- 00:32:35.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.489 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2448283 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2448283 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2448283 ']' 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:35.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.489 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.489 [2024-12-06 16:59:24.074722] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
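The nvmf_tcp_init sequence traced above builds the test topology: one port of the e810 pair is moved into a private network namespace to act as the target, while its sibling stays in the default namespace as the initiator, so the NVMe/TCP traffic crosses a real link. A condensed sketch of the commands visible in the trace (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are specific to this run):

  # Target side lives in its own namespace; initiator stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The two pings are exactly the reachability checks whose output appears in the log; only after both succeed does nvmfappstart launch nvmf_tgt inside the namespace.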
00:32:35.489 [2024-12-06 16:59:24.074770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.489 [2024-12-06 16:59:24.144802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.489 [2024-12-06 16:59:24.159878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.489 [2024-12-06 16:59:24.159903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.489 [2024-12-06 16:59:24.159909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.489 [2024-12-06 16:59:24.159914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.489 [2024-12-06 16:59:24.159918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.489 [2024-12-06 16:59:24.160404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.750 [2024-12-06 16:59:24.257655] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.750 [2024-12-06 16:59:24.265822] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.750 null0 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.750 null1 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2448308 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2448308 /tmp/host.sock 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2448308 ']' 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:35.750 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:35.750 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:35.751 [2024-12-06 16:59:24.328808] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
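At this point two independent SPDK applications are running: the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x2, RPC socket /var/tmp/spdk.sock, inside the cvl_0_0_ns_spdk namespace) and a second instance acting as the NVMe host (nvmf_tgt -m 0x1 -r /tmp/host.sock, default namespace), which will run the bdev_nvme discovery client. Roughly, from the commands in the trace (binary path shortened here):

  # Target: serves the NVMe-oF subsystems, reached over 10.0.0.2.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # Host: same binary, separate RPC socket; hosts the discovery client.
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

rpc_cmd invocations without -s in the trace go to the target's default socket; those with -s /tmp/host.sock drive the host instance.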
00:32:35.751 [2024-12-06 16:59:24.328853] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2448308 ] 00:32:35.751 [2024-12-06 16:59:24.405555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.751 [2024-12-06 16:59:24.423778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.010 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
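With discovery started on the host (bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -q nqn.2021-12.io.spdk:test) but no subsystem exposed yet, both helper queries must come back empty. The helpers' bodies below are a reconstruction from the pipelines in the trace (host/discovery.sh lines 55 and 59 in this tree); the exact definitions may differ:

  # List attached NVMe controllers on the host instance, e.g. "nvme0".
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name' | sort | xargs
  }
  # List bdevs created from attached namespaces, e.g. "nvme0n1 nvme0n2".
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }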
00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.011 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.271 [2024-12-06 16:59:24.726948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.271 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:36.272 16:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:37.210 [2024-12-06 16:59:25.565301] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:37.210 [2024-12-06 16:59:25.565321] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:37.210 [2024-12-06 16:59:25.565336] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.210 [2024-12-06 16:59:25.651580] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:37.210 [2024-12-06 16:59:25.833685] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:37.210 [2024-12-06 16:59:25.834736] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb05d90:1 started. 00:32:37.210 [2024-12-06 16:59:25.836373] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:37.210 [2024-12-06 16:59:25.836391] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.211 [2024-12-06 16:59:25.883858] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb05d90 was disconnected and freed. delete nvme_qpair. 
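Once nqn.2021-12.io.spdk:test is allowed on cnode0, the discovery client attaches the controller, and the test's waitforcondition helper polls until get_subsystem_names reports "nvme0". Its shape can be read off the autotest_common.sh line numbers in the trace (918-924); the body below is an inferred reconstruction, not the verbatim source:

  # Re-evaluate a bash condition up to $max times, one second apart.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1   # assumption: non-zero on timeout
  }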
00:32:37.211 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.470 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
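get_subsystem_paths reduces a controller's connected paths to the list of listener ports, which is what lets the test assert "4420" now and "4420 4421" after the second listener is added. The pipeline is taken verbatim from the trace; only the function wrapper is assumed:

  # Print the trsvcid (TCP port) of every path to controller $1, sorted.
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }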
00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.471 16:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.471 [2024-12-06 16:59:26.007567] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb06340:1 started. 
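Each namespace surfaced through discovery (null0 as nvme0n1, and now null1 as nvme0n2) raises one bdev notification on the host, and is_notification_count_eq checks exactly that delta. From the notify_get_notifications calls and the notify_id bookkeeping in the trace (-i 0 then -i 1, with notify_id stepping 0 -> 1 -> 2), the counting helper plausibly looks like the sketch below; the notify_id update rule is inferred:

  # Count notifications newer than $notify_id and advance the cursor.
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }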
00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 [2024-12-06 16:59:26.013542] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb06340 was disconnected and freed. delete nvme_qpair. 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 [2024-12-06 16:59:26.074618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:37.471 [2024-12-06 16:59:26.075583] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:37.471 [2024-12-06 16:59:26.075604] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.471 16:59:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:37.471 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:37.472 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:37.472 [2024-12-06 16:59:26.161850] 
bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:37.731 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:37.731 16:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:37.731 [2024-12-06 16:59:26.260699] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:37.731 [2024-12-06 16:59:26.260738] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:37.731 [2024-12-06 16:59:26.260747] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:37.731 [2024-12-06 16:59:26.260752] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.670 [2024-12-06 16:59:27.250405] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:38.670 [2024-12-06 16:59:27.250423] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.670 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:38.670 [2024-12-06 16:59:27.257230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.670 [2024-12-06 16:59:27.257247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.670 [2024-12-06 16:59:27.257253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.670 [2024-12-06 16:59:27.257259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.671 [2024-12-06 16:59:27.257269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.671 [2024-12-06 16:59:27.257274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.671 [2024-12-06 16:59:27.257279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:38.671 [2024-12-06 16:59:27.257285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:38.671 [2024-12-06 16:59:27.257290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:38.671 [2024-12-06 16:59:27.267245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.671 [2024-12-06 16:59:27.277278] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.671 [2024-12-06 16:59:27.277286] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.671 [2024-12-06 16:59:27.277291] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.671 [2024-12-06 16:59:27.277295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.671 [2024-12-06 16:59:27.277309] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.671 [2024-12-06 16:59:27.277637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.671 [2024-12-06 16:59:27.277648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.671 [2024-12-06 16:59:27.277654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.671 [2024-12-06 16:59:27.277662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.671 [2024-12-06 16:59:27.277675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.671 [2024-12-06 16:59:27.277680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.671 [2024-12-06 16:59:27.277687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
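The polling wrapper exercised throughout this trace is autotest_common.sh's waitforcondition (the @918-@924 lines above). A minimal sketch reconstructed from those traced lines follows; the condition string, the max=10 retry budget, and the 1-second sleep are taken directly from the trace, while the final failure return is an assumption, since this run never exhausts the budget:

    # Poll an arbitrary shell condition until it holds or the retry budget runs out.
    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while ((max--)); do
            # The condition is evaluated in the caller's context via eval.
            eval "$cond" && return 0
            sleep 1
        done
        return 1  # assumed: give up after roughly 10 seconds
    }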
00:32:38.671 [2024-12-06 16:59:27.277692] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.671 [2024-12-06 16:59:27.277696] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.671 [2024-12-06 16:59:27.277699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.671 [2024-12-06 16:59:27.287338] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.671 [2024-12-06 16:59:27.287346] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.671 [2024-12-06 16:59:27.287352] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.671 [2024-12-06 16:59:27.287355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.671 [2024-12-06 16:59:27.287365] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.671 [2024-12-06 16:59:27.287561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.671 [2024-12-06 16:59:27.287573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.671 [2024-12-06 16:59:27.287579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.671 [2024-12-06 16:59:27.287586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.671 [2024-12-06 16:59:27.287594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.671 [2024-12-06 16:59:27.287599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.671 [2024-12-06 16:59:27.287605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.671 [2024-12-06 16:59:27.287609] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.671 [2024-12-06 16:59:27.287612] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.671 [2024-12-06 16:59:27.287616] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:38.671 [2024-12-06 16:59:27.297394] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.671 [2024-12-06 16:59:27.297405] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.671 [2024-12-06 16:59:27.297409] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.671 [2024-12-06 16:59:27.297412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.671 [2024-12-06 16:59:27.297424] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.671 [2024-12-06 16:59:27.297749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.671 [2024-12-06 16:59:27.297758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.671 [2024-12-06 16:59:27.297764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.671 [2024-12-06 16:59:27.297771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.671 [2024-12-06 16:59:27.297779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.671 [2024-12-06 16:59:27.297783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.671 [2024-12-06 16:59:27.297788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.671 [2024-12-06 16:59:27.297792] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
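The helpers being compared against "nvme0" and "nvme0n1 nvme0n2" in this trace are thin wrappers over the host RPC socket; each pipes the JSON-RPC result through jq, sort, and xargs so the output collapses to a single sorted, space-separated line suitable for string comparison. A sketch assembled from the @55/@59/@63 pipelines visible above (socket path and RPC method names exactly as traced):

    HOST_SOCK=/tmp/host.sock

    get_subsystem_names() {  # controllers attached on the host, e.g. "nvme0"
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {  # namespace bdevs, e.g. "nvme0n1 nvme0n2"
        rpc_cmd -s $HOST_SOCK bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {  # trsvcids (ports) of every path of one controller, sorted numerically
        rpc_cmd -s $HOST_SOCK bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }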
00:32:38.671 [2024-12-06 16:59:27.297796] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.671 [2024-12-06 16:59:27.297799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.671 [2024-12-06 16:59:27.307452] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.671 [2024-12-06 16:59:27.307461] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.671 [2024-12-06 16:59:27.307465] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.671 [2024-12-06 16:59:27.307468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.671 [2024-12-06 16:59:27.307477] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.671 [2024-12-06 16:59:27.307705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.671 [2024-12-06 16:59:27.307714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.671 [2024-12-06 16:59:27.307719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.671 [2024-12-06 16:59:27.307727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.671 [2024-12-06 16:59:27.307734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.671 [2024-12-06 16:59:27.307739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.671 [2024-12-06 16:59:27.307744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.671 [2024-12-06 16:59:27.307748] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.671 [2024-12-06 16:59:27.307752] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.671 [2024-12-06 16:59:27.307754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.671 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.671 [2024-12-06 16:59:27.317506] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.671 [2024-12-06 16:59:27.317515] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.671 [2024-12-06 16:59:27.317518] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.317521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.672 [2024-12-06 16:59:27.317535] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:38.672 [2024-12-06 16:59:27.317787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.672 [2024-12-06 16:59:27.317798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.672 [2024-12-06 16:59:27.317803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.672 [2024-12-06 16:59:27.317811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.672 [2024-12-06 16:59:27.317819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.672 [2024-12-06 16:59:27.317824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.672 [2024-12-06 16:59:27.317829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.672 [2024-12-06 16:59:27.317833] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.672 [2024-12-06 16:59:27.317837] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.672 [2024-12-06 16:59:27.317841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:38.672 [2024-12-06 16:59:27.327563] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:32:38.672 [2024-12-06 16:59:27.327571] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.672 [2024-12-06 16:59:27.327574] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.327577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.672 [2024-12-06 16:59:27.327587] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.327869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.672 [2024-12-06 16:59:27.327882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.672 [2024-12-06 16:59:27.327887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.672 [2024-12-06 16:59:27.327895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.672 [2024-12-06 16:59:27.327902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.672 [2024-12-06 16:59:27.327907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.672 [2024-12-06 16:59:27.327912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.672 [2024-12-06 16:59:27.327916] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.672 [2024-12-06 16:59:27.327919] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.672 [2024-12-06 16:59:27.327922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.672 [2024-12-06 16:59:27.337615] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.672 [2024-12-06 16:59:27.337625] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.672 [2024-12-06 16:59:27.337628] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.337631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.672 [2024-12-06 16:59:27.337641] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
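Notification accounting follows the same polling pattern: the @74/@75 lines in this section show notification_count being set to the jq length of the notify_get_notifications result and notify_id advancing past the consumed events (notify_id stays at 2 while no new events arrive, then jumps to 4 once two more land). A sketch of that bookkeeping, with notify_id assumed to be a global initialized by earlier test setup:

    get_notification_count() {
        # Count only events newer than the last consumed id, then advance the id.
        notification_count=$(rpc_cmd -s $HOST_SOCK notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }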
00:32:38.672 [2024-12-06 16:59:27.337924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.672 [2024-12-06 16:59:27.337932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.672 [2024-12-06 16:59:27.337937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.672 [2024-12-06 16:59:27.337945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.672 [2024-12-06 16:59:27.337952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.672 [2024-12-06 16:59:27.337957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.672 [2024-12-06 16:59:27.337962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.672 [2024-12-06 16:59:27.337966] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.672 [2024-12-06 16:59:27.337970] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.672 [2024-12-06 16:59:27.337973] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.672 [2024-12-06 16:59:27.347670] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.672 [2024-12-06 16:59:27.347678] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.672 [2024-12-06 16:59:27.347682] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.347685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.672 [2024-12-06 16:59:27.347697] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.347887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.672 [2024-12-06 16:59:27.347896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.672 [2024-12-06 16:59:27.347901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.672 [2024-12-06 16:59:27.347909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.672 [2024-12-06 16:59:27.347916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.672 [2024-12-06 16:59:27.347921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.672 [2024-12-06 16:59:27.347926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.672 [2024-12-06 16:59:27.347930] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:32:38.672 [2024-12-06 16:59:27.347933] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.672 [2024-12-06 16:59:27.347936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:32:38.672 16:59:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:38.672 [2024-12-06 16:59:27.357726] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.672 [2024-12-06 16:59:27.357734] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.672 [2024-12-06 16:59:27.357737] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.357740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.672 [2024-12-06 16:59:27.357750] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.672 [2024-12-06 16:59:27.358030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.672 [2024-12-06 16:59:27.358038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.672 [2024-12-06 16:59:27.358043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.672 [2024-12-06 16:59:27.358050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.672 [2024-12-06 16:59:27.358057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.672 [2024-12-06 16:59:27.358061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.672 [2024-12-06 16:59:27.358066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.672 [2024-12-06 16:59:27.358071] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.672 [2024-12-06 16:59:27.358074] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.672 [2024-12-06 16:59:27.358077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.931 [2024-12-06 16:59:27.367778] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.931 [2024-12-06 16:59:27.367786] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.931 [2024-12-06 16:59:27.367793] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:32:38.931 [2024-12-06 16:59:27.367796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.931 [2024-12-06 16:59:27.367805] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:38.931 [2024-12-06 16:59:27.367980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.931 [2024-12-06 16:59:27.367988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.931 [2024-12-06 16:59:27.367993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.931 [2024-12-06 16:59:27.368001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.931 [2024-12-06 16:59:27.368007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.931 [2024-12-06 16:59:27.368012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.931 [2024-12-06 16:59:27.368017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.931 [2024-12-06 16:59:27.368021] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.931 [2024-12-06 16:59:27.368024] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.931 [2024-12-06 16:59:27.368027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:38.931 [2024-12-06 16:59:27.377834] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:32:38.931 [2024-12-06 16:59:27.377842] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:38.931 [2024-12-06 16:59:27.377845] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:38.931 [2024-12-06 16:59:27.377848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:38.931 [2024-12-06 16:59:27.377857] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:32:38.931 [2024-12-06 16:59:27.378146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:38.931 [2024-12-06 16:59:27.378154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5cd0 with addr=10.0.0.2, port=4420 00:32:38.931 [2024-12-06 16:59:27.378159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad5cd0 is same with the state(6) to be set 00:32:38.931 [2024-12-06 16:59:27.378167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad5cd0 (9): Bad file descriptor 00:32:38.931 [2024-12-06 16:59:27.378187] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:32:38.931 [2024-12-06 16:59:27.378198] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:38.931 [2024-12-06 16:59:27.378211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:38.931 [2024-12-06 16:59:27.378216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:38.931 [2024-12-06 16:59:27.378221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:38.931 [2024-12-06 16:59:27.378225] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:38.931 [2024-12-06 16:59:27.378229] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:38.931 [2024-12-06 16:59:27.378234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
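The burst of connect() errno 111 (ECONNREFUSED) and "Bad file descriptor" records above is the expected fallout of step @127: the 4420 listener was just removed from the target, so every host-side reconnect to 10.0.0.2:4420 is refused until the refreshed discovery log page drops that path ("...:4420 not found" / "...:4421 found again" in the final cycle). The test then only has to wait for the path list to converge; as traced at @127/@131:

    # Drop the first listener; the host should converge on the 4421 path alone.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'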
00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.870 16:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.254 [2024-12-06 16:59:29.583477] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:41.254 [2024-12-06 16:59:29.583491] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:41.254 [2024-12-06 16:59:29.583501] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:41.254 [2024-12-06 16:59:29.669739] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:32:41.254 [2024-12-06 16:59:29.849727] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:32:41.254 [2024-12-06 16:59:29.850538] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xaee060:1 started. 00:32:41.254 [2024-12-06 16:59:29.851886] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:41.254 [2024-12-06 16:59:29.851909] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:32:41.254 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.254 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.254 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:41.254 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.254 [2024-12-06 16:59:29.853793] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xaee060 was disconnected and freed. delete nvme_qpair. 
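The request/response pair below is the negative test at @143: starting a second discovery service with the bdev prefix "nvme" already in use is rejected with JSON-RPC error -17 ("File exists"), and the NOT wrapper (the @652-@679 lines) converts that expected failure into a pass by checking es=1. As traced:

    # Expected to fail: discovery "nvme" against 10.0.0.2:8009 is already running.
    NOT rpc_cmd -s $HOST_SOCK bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w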
00:32:41.254 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.255 request: 00:32:41.255 { 00:32:41.255 "name": "nvme", 00:32:41.255 "trtype": "tcp", 00:32:41.255 "traddr": "10.0.0.2", 00:32:41.255 "adrfam": "ipv4", 00:32:41.255 "trsvcid": "8009", 00:32:41.255 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:41.255 "wait_for_attach": true, 00:32:41.255 "method": "bdev_nvme_start_discovery", 00:32:41.255 "req_id": 1 00:32:41.255 } 00:32:41.255 Got JSON-RPC error response 00:32:41.255 response: 00:32:41.255 { 00:32:41.255 "code": -17, 00:32:41.255 "message": "File exists" 00:32:41.255 } 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.255 request: 00:32:41.255 { 00:32:41.255 "name": "nvme_second", 00:32:41.255 "trtype": "tcp", 00:32:41.255 "traddr": "10.0.0.2", 00:32:41.255 "adrfam": "ipv4", 00:32:41.255 "trsvcid": "8009", 00:32:41.255 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:41.255 "wait_for_attach": true, 00:32:41.255 "method": "bdev_nvme_start_discovery", 00:32:41.255 "req_id": 1 00:32:41.255 } 00:32:41.255 Got JSON-RPC error response 00:32:41.255 response: 00:32:41.255 { 00:32:41.255 "code": -17, 00:32:41.255 "message": "File exists" 00:32:41.255 } 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:41.255 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:41.514 16:59:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.514 16:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:42.450 [2024-12-06 16:59:31.015085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:42.450 [2024-12-06 16:59:31.015114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5770 with addr=10.0.0.2, port=8010 00:32:42.450 [2024-12-06 16:59:31.015127] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:42.450 
[2024-12-06 16:59:31.015133] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:42.450 [2024-12-06 16:59:31.015138] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:43.385 [2024-12-06 16:59:32.017492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:32:43.385 [2024-12-06 16:59:32.017523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xad5770 with addr=10.0.0.2, port=8010 00:32:43.385 [2024-12-06 16:59:32.017535] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:32:43.385 [2024-12-06 16:59:32.017540] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:43.385 [2024-12-06 16:59:32.017545] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:32:44.760 [2024-12-06 16:59:33.019426] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:32:44.760 request: 00:32:44.760 { 00:32:44.760 "name": "nvme_second", 00:32:44.760 "trtype": "tcp", 00:32:44.760 "traddr": "10.0.0.2", 00:32:44.760 "adrfam": "ipv4", 00:32:44.760 "trsvcid": "8010", 00:32:44.760 "hostnqn": "nqn.2021-12.io.spdk:test", 00:32:44.760 "wait_for_attach": false, 00:32:44.760 "attach_timeout_ms": 3000, 00:32:44.760 "method": "bdev_nvme_start_discovery", 00:32:44.760 "req_id": 1 00:32:44.760 } 00:32:44.760 Got JSON-RPC error response 00:32:44.760 response: 00:32:44.760 { 00:32:44.760 "code": -110, 00:32:44.760 "message": "Connection timed out" 00:32:44.760 } 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2448308 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:32:44.760 16:59:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:44.760 rmmod nvme_tcp 00:32:44.760 rmmod nvme_fabrics 00:32:44.760 rmmod nvme_keyring 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2448283 ']' 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2448283 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2448283 ']' 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2448283 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2448283 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2448283' 00:32:44.760 killing process with pid 2448283 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2448283 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2448283 00:32:44.760 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
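The cleanup above is the stock nvmftestfini sequence; modulo the retry loop and the namespace removal the next lines finish, it reduces to roughly the following (the pipe ordering of the iptables step is an assumption — the trace only shows the three commands individually):

    # Hedged sketch of the teardown traced above.
    sync
    modprobe -v -r nvme-tcp     # also drops nvme_fabrics and nvme_keyring, per the rmmod output
    modprobe -v -r nvme-fabrics
    kill 2448283                # nvmfpid recorded when the target started
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove only the test's ACCEPT rules
    ip -4 addr flush cvl_0_1
    # _remove_spdk_ns (expanded next) deletes the cvl_0_0_ns_spdk namespace.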
00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.761 16:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.667 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:46.667 00:32:46.667 real 0m16.729s 00:32:46.667 user 0m20.230s 00:32:46.667 sys 0m5.076s 00:32:46.667 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:46.667 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:46.667 ************************************ 00:32:46.667 END TEST nvmf_host_discovery 00:32:46.667 ************************************ 00:32:46.667 16:59:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:46.667 16:59:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:46.667 16:59:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:46.667 16:59:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.925 ************************************ 00:32:46.925 START TEST nvmf_host_multipath_status 00:32:46.925 ************************************ 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:32:46.925 * Looking for test storage... 00:32:46.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:32:46.925 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:46.926 16:59:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:46.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.926 --rc genhtml_branch_coverage=1 00:32:46.926 --rc genhtml_function_coverage=1 00:32:46.926 --rc genhtml_legend=1 00:32:46.926 --rc geninfo_all_blocks=1 00:32:46.926 --rc geninfo_unexecuted_blocks=1 00:32:46.926 00:32:46.926 ' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:46.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.926 --rc genhtml_branch_coverage=1 00:32:46.926 --rc genhtml_function_coverage=1 00:32:46.926 --rc genhtml_legend=1 00:32:46.926 --rc geninfo_all_blocks=1 00:32:46.926 --rc geninfo_unexecuted_blocks=1 00:32:46.926 00:32:46.926 ' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:46.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:46.926 --rc genhtml_branch_coverage=1 00:32:46.926 --rc genhtml_function_coverage=1 00:32:46.926 --rc genhtml_legend=1 00:32:46.926 --rc geninfo_all_blocks=1 00:32:46.926 --rc geninfo_unexecuted_blocks=1 00:32:46.926 00:32:46.926 ' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:46.926 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:32:46.926 --rc genhtml_branch_coverage=1 00:32:46.926 --rc genhtml_function_coverage=1 00:32:46.926 --rc genhtml_legend=1 00:32:46.926 --rc geninfo_all_blocks=1 00:32:46.926 --rc geninfo_unexecuted_blocks=1 00:32:46.926 00:32:46.926 ' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:32:46.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:32:46.926 16:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.285 16:59:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.285 
16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:52.285 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:52.285 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:52.285 Found net devices under 0000:31:00.0: cvl_0_0 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:52.285 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:52.286 Found net devices under 0000:31:00.1: cvl_0_1 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:52.286 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:52.545 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:52.545 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:52.545 16:59:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:52.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:52.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:32:52.545 00:32:52.545 --- 10.0.0.2 ping statistics --- 00:32:52.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.545 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:52.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:52.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:32:52.545 00:32:52.545 --- 10.0.0.1 ping statistics --- 00:32:52.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:52.545 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2454816 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2454816 
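The nvmf_tcp_init traced above is compact enough to restate: the target-side NIC is moved into a private network namespace so one machine can play both initiator and target, and the two pings prove the 10.0.0.0/24 link in both directions. Every command below appears verbatim in the trace:

    # Recap of the namespace plumbing, as executed above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # (the real run tags this rule with an SPDK_NVMF comment for later cleanup)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator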
00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2454816 ']' 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:32:52.545 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.545 [2024-12-06 16:59:41.164491] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:32:52.545 [2024-12-06 16:59:41.164541] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.804 [2024-12-06 16:59:41.248286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:52.804 [2024-12-06 16:59:41.265931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.804 [2024-12-06 16:59:41.265964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.804 [2024-12-06 16:59:41.265972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.804 [2024-12-06 16:59:41.265979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.804 [2024-12-06 16:59:41.265985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
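The target itself is launched inside that namespace with a two-core mask, which is why the next two NOTICE lines report reactors starting: -m 0x3 selects exactly cores 0 and 1. The launch command is taken from the trace; the backgrounding and PID capture that feed waitforlisten are stated here as an assumption about common.sh:

    # Hedged sketch of nvmfappstart -m 0x3 as traced above.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # waitforlisten then polls /var/tmp/spdk.sock until the target answers RPCs.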
00:32:52.804 [2024-12-06 16:59:41.267131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.804 [2024-12-06 16:59:41.267150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2454816 00:32:52.804 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:53.063 [2024-12-06 16:59:41.505787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:53.063 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:53.063 Malloc0 00:32:53.063 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:32:53.323 16:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:53.583 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:53.583 [2024-12-06 16:59:42.185990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:53.583 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:53.843 [2024-12-06 16:59:42.362601] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2455033 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2455033 
/var/tmp/bdevperf.sock 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2455033 ']' 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:53.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:53.843 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:54.102 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.102 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:32:54.102 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:32:54.360 16:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:54.618 Nvme0n1 00:32:54.618 16:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:32:55.185 Nvme0n1 00:32:55.185 16:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:32:55.185 16:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:32:57.110 16:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:32:57.110 16:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:32:57.368 16:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:57.368 16:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:32:58.302 16:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:32:58.302 16:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:58.302 16:59:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:58.302 16:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.561 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.561 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:58.561 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.561 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:58.820 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.078 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.078 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:59.078 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.078 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:59.335 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.335 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:59.335 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:59.335 16:59:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:59.335 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:59.335 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:32:59.335 16:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:59.592 16:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:32:59.592 16:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:00.969 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.227 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.227 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:01.227 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:33:01.228 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.486 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.486 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:01.486 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:01.486 16:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.486 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.486 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:01.486 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:01.486 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:01.744 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:01.744 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:01.744 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:01.744 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:02.002 16:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:02.937 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:02.937 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:02.937 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:02.937 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:03.197 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.197 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:03.197 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.197 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:03.456 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:03.456 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:03.456 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.456 16:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:03.456 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.456 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:03.456 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.456 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:03.713 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.714 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:03.714 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.714 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:03.973 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.973 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:03.973 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:03.973 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:03.973 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:03.973 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:03.973 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:33:04.231 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:04.231 16:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:05.608 16:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:05.608 16:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:05.608 16:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.608 16:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.608 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:05.868 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.868 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:05.868 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:05.868 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:05.868 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:05.868 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:05.868 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:05.868 
16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.126 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:06.126 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:06.126 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:06.126 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:06.385 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:06.385 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:06.385 16:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:06.385 16:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:06.642 16:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:07.574 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:07.574 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:07.574 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:07.574 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:07.834 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:07.834 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:07.834 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:07.834 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.092 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:08.351 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:08.351 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:08.351 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.351 16:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:08.351 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.351 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:08.351 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:08.351 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:08.610 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:08.610 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:08.610 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:08.868 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:08.868 16:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:09.806 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:09.806 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:09.806 16:59:58 
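check_status, called at @96, @102, @106, @110 and @114 above with six booleans, maps one-to-one onto six port_status calls; the argument order can be read off the traced @68 through @73 invocations. A sketch under the same assumptions:

  check_status() {
      # current, connected, accessible; port 4420 first, then 4421, per attribute
      port_status 4420 current "$1"
      port_status 4421 current "$2"
      port_status 4420 connected "$3"
      port_status 4421 connected "$4"
      port_status 4420 accessible "$5"
      port_status 4421 accessible "$6"
  }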
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:09.806 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:10.065 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.065 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:10.065 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.065 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:10.322 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.323 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:10.323 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.323 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:10.323 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.323 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:10.323 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.323 16:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:10.580 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.580 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:10.580 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.580 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:10.837 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:10.837 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:10.837 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:10.837 16:59:59 
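While port_status and check_status query the initiator over /var/tmp/bdevperf.sock, the set_ANA_state calls at @59/@60 go to the target's default RPC socket and flip the ANA state of each listener; the sleep 1 that follows gives the host time to pick up the change, since ANA state is reported asynchronously. Sketched from the traced invocations (the NQN and addresses are the literal values of this run):

  set_ANA_state() {
      # $1 applies to the 4420 listener, $2 to the 4421 listener
      $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }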
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:10.837 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:10.837 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:11.094 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:11.094 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:11.351 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:11.351 16:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:12.283 17:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:12.283 17:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:12.284 17:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.284 17:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:12.542 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.542 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:12.542 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.542 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:12.800 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.800 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:12.800 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.800 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:12.800 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:12.800 17:00:01 
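The @116 call above is the pivot of the test's second half: it switches the multipath policy of Nvme0n1 from the default active_passive to active_active, after which more than one path may be current at once (hence the all-true expectation at the @121 check). As invoked in the trace:

  $rootdir/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active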
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:12.800 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:12.800 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:13.057 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.057 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:13.057 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.057 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:13.316 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.316 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:13.316 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:13.316 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:13.316 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:13.316 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:13.316 17:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:13.573 17:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:13.831 17:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:14.764 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:15.021 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.021 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:15.021 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.021 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.278 17:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:15.535 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.535 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:15.535 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:15.535 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:15.793 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:15.793 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:15.793 
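Read together with the two remaining checks below (@131 and @135), the sweep amounts to a small table of expected flags; connected stays true/true throughout, and accessible only drops for a listener set inaccessible:

  ANA state 4420/4421            policy          current 4420/4421   accessible 4420/4421
  non_optimized/optimized        active_passive  false/true          true/true
  non_optimized/non_optimized    active_passive  true/false          true/true
  non_optimized/inaccessible     active_passive  true/false          true/false
  inaccessible/inaccessible      active_passive  false/false         false/false
  inaccessible/optimized         active_passive  false/true          false/true
  optimized/optimized            active_active   true/true           true/true
  non_optimized/optimized        active_active   false/true          true/true
  non_optimized/non_optimized    active_active   true/true           true/true
  non_optimized/inaccessible     active_active   true/false          true/false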
17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:15.793 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:16.052 17:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:16.985 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:16.985 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:16.985 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:16.985 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:17.244 17:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.502 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.502 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:17.502 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.502 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:17.760 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:18.018 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:18.018 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:18.018 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:18.276 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:18.276 17:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:19.210 17:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:19.469 17:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:19.469 17:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.469 17:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:19.469 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.469 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:19.469 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.469 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:19.728 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:19.987 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:19.987 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:19.987 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:19.987 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2455033 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2455033 ']' 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2455033 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455033 00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2
00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455033'
00:33:20.247 killing process with pid 2455033
00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2455033
00:33:20.247 17:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2455033
00:33:20.510 {
00:33:20.510   "results": [
00:33:20.510     {
00:33:20.510       "job": "Nvme0n1",
00:33:20.510       "core_mask": "0x4",
00:33:20.510       "workload": "verify",
00:33:20.510       "status": "terminated",
00:33:20.510       "verify_range": {
00:33:20.510         "start": 0,
00:33:20.510         "length": 16384
00:33:20.510       },
00:33:20.510       "queue_depth": 128,
00:33:20.510       "io_size": 4096,
00:33:20.510       "runtime": 25.158908,
00:33:20.510       "iops": 12056.604364545552,
00:33:20.510       "mibps": 47.09611079900606,
00:33:20.510       "io_failed": 0,
00:33:20.510       "io_timeout": 0,
00:33:20.510       "avg_latency_us": 10598.332241061195,
00:33:20.510       "min_latency_us": 315.73333333333335,
00:33:20.510       "max_latency_us": 3019898.88
00:33:20.510     }
00:33:20.510   ],
00:33:20.510   "core_count": 1
00:33:20.510 }
00:33:20.510 17:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2455033
00:33:20.510 17:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-06 16:59:42.421815] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
[2024-12-06 16:59:42.421896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2455033 ]
[2024-12-06 16:59:42.506737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-06 16:59:42.534108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 90 seconds...
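The terminated-job summary above is plain JSON once the 00:33:20.510 log prefixes are stripped, so it can be post-processed directly; io_failed: 0 despite the inaccessible phases is the headline result. A hypothetical one-liner, assuming the block has been saved to results.json:

  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.mibps) MiB/s over \(.runtime)s, \(.io_failed) failed"' results.json
  # expected output: Nvme0n1: 12056 IOPS, 47.09611079900606 MiB/s over 25.158908s, 0 failed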
00:33:20.510 10860.00 IOPS, 42.42 MiB/s [2024-12-06T16:00:09.203Z] 11952.00 IOPS, 46.69 MiB/s [2024-12-06T16:00:09.203Z] 12287.67 IOPS, 48.00 MiB/s [2024-12-06T16:00:09.203Z] 12486.00 IOPS, 48.77 MiB/s [2024-12-06T16:00:09.203Z] 12606.40 IOPS, 49.24 MiB/s [2024-12-06T16:00:09.203Z] 12676.00 IOPS, 49.52 MiB/s [2024-12-06T16:00:09.203Z] 12730.57 IOPS, 49.73 MiB/s [2024-12-06T16:00:09.203Z] 12750.50 IOPS, 49.81 MiB/s [2024-12-06T16:00:09.203Z] 12776.33 IOPS, 49.91 MiB/s [2024-12-06T16:00:09.203Z] 12807.10 IOPS, 50.03 MiB/s [2024-12-06T16:00:09.203Z] 12829.27 IOPS, 50.11 MiB/s [2024-12-06T16:00:09.203Z] [2024-12-06 16:59:55.030539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:109344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.510 [2024-12-06 16:59:55.030573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:20.510 [2024-12-06 16:59:55.030604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.510 [2024-12-06 16:59:55.030611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:20.510 [2024-12-06 16:59:55.030622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.510 [2024-12-06 16:59:55.030628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.510 [2024-12-06 16:59:55.030639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.510 [2024-12-06 16:59:55.030644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:20.510 [2024-12-06 16:59:55.030655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.510 [2024-12-06 16:59:55.030660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:20.510 [2024-12-06 16:59:55.030671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.510 [2024-12-06 16:59:55.030676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:20.510 [2024-12-06 16:59:55.030686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.510 [2024-12-06 16:59:55.030691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:20.511 [2024-12-06 16:59:55.030702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.511 [2024-12-06 16:59:55.030707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:20.511 [2024-12-06 16:59:55.031962] nvme_qpair.c: 
00:33:20.511 [2024-12-06 16:59:55.031974] nvme_qpair.c: *NOTICE*: [log condensed: dozens of repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs: WRITE (lba 109400-109912) and READ (lba 109288-109336) commands on sqid:1 nsid:1, len:8, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:33:20.513 Periodic throughput samples:
[2024-12-06T16:00:09.206Z] 12061.00 IOPS, 47.11 MiB/s
[2024-12-06T16:00:09.206Z] 11133.23 IOPS, 43.49 MiB/s
[2024-12-06T16:00:09.206Z] 10338.00 IOPS, 40.38 MiB/s
[2024-12-06T16:00:09.206Z] 10277.73 IOPS, 40.15 MiB/s
[2024-12-06T16:00:09.206Z] 10442.81 IOPS, 40.79 MiB/s
[2024-12-06T16:00:09.206Z] 10803.53 IOPS, 42.20 MiB/s
[2024-12-06T16:00:09.206Z] 11156.28 IOPS, 43.58 MiB/s
[2024-12-06T16:00:09.206Z] 11334.11 IOPS, 44.27 MiB/s
[2024-12-06T16:00:09.206Z] 11415.95 IOPS, 44.59 MiB/s
[2024-12-06T16:00:09.206Z] 11505.76 IOPS, 44.94 MiB/s
[2024-12-06T16:00:09.206Z] 11750.23 IOPS, 45.90 MiB/s
[2024-12-06T16:00:09.206Z] 11962.74 IOPS, 46.73 MiB/s
00:33:20.513 [2024-12-06 17:00:06.886528] nvme_qpair.c: *NOTICE*: [log condensed: a second burst of command/completion NOTICE pairs: WRITE (lba 118352-119184) and READ (lba 118320) commands on sqid:1 nsid:1, len:8, again all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
00:33:20.514 12020.50 IOPS, 46.96 MiB/s
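These completions are the ANA failover case this multipath test drives on purpose: (03/02) is NVMe status code type 3h (path related) with status code 02h, Asymmetric Access Inaccessible. A quick way to tally such a burst from a saved console log (file name illustrative; assumes one NOTICE per line, as in the raw console):

# Count completions carrying the ANA-inaccessible status.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log
# Count the printed commands, split by opcode.
awk '/nvme_io_qpair_print_command/ {
         ops[($0 ~ /WRITE/) ? "WRITE" : "READ"]++
     }
     END { for (op in ops) print op, ops[op] }' console.log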
[2024-12-06T16:00:09.207Z] 12052.36 IOPS, 47.08 MiB/s
[2024-12-06T16:00:09.207Z] Received shutdown signal, test time was about 25.159519 seconds
00:33:20.514 Latency(us)
00:33:20.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:20.514 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:20.514 Verification LBA range: start 0x0 length 0x4000
00:33:20.514 Nvme0n1 : 25.16 12056.60 47.10 0.00 0.00 10598.33 315.73 3019898.88
00:33:20.514 ===================================================================================================================
00:33:20.514 Total : 12056.60 47.10 0.00 0.00 10598.33 315.73 3019898.88
17:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
17:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
17:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
17:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
[xtrace condensed: nvmftestfini runs nvmfcleanup (nvmf/common.sh@516, @121-@129): sync, then modprobe -v -r nvme-tcp and modprobe -v -r nvme-fabrics, which unload the kernel modules:]
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
[xtrace condensed: killprocess 2454816 (common/autotest_common.sh@954-@978) checks the pid, confirms uname is Linux and that ps --no-headers -o comm= reports reactor_0 rather than sudo, then kills and waits on it]
killing process with pid 2454816
[xtrace condensed: nvmftestfini then restores firewall rules (iptables-save | grep -v SPDK_NVMF | iptables-restore) and removes the spdk network namespace via _remove_spdk_ns]
17:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:23.310 real 0m36.087s
00:33:23.310 user 1m35.292s
00:33:23.310 sys 0m9.234s
00:33:23.310 ************************************
00:33:23.310 END TEST nvmf_host_multipath_status
00:33:23.310 ************************************
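With multipath_status done, the harness has torn the target down. A minimal stand-alone sketch of that sequence, assuming a running nvmf_tgt started from this same checkout; the pid value is an illustrative placeholder and this is not SPDK's nvmftestfini itself:

#!/usr/bin/env bash
# Sketch of the traced teardown (run as root; values illustrative).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TGT_PID=2454816   # placeholder: pid of the nvmf_tgt reactor under test

# 1. Remove the subsystem under test over JSON-RPC.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# 2. Unload initiator kernel modules (mirrors nvmfcleanup; the rmmod lines
#    above come from these two calls).
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# 3. Stop the target process (mirrors killprocess).
kill "$TGT_PID" 2>/dev/null || true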
17:00:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:23.310 ************************************
00:33:23.310 START TEST nvmf_discovery_remove_ifc
00:33:23.310 ************************************
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:23.310 * Looking for test storage...
00:33:23.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[xtrace condensed: cmp_versions splits both versions on IFS=.-: (ver1=(1 15), ver1_l=2; ver2=(2), ver2_l=1), validates each field through decimal, finds ver1[0]=1 < ver2[0]=2 and returns 0, so the installed lcov 1.15 is treated as older than 2]
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[xtrace condensed: LCOV_OPTS and LCOV are then exported with the lcov_branch_coverage/lcov_function_coverage, genhtml and geninfo flags; the export trace repeats the same flag block four times]
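Reconstructed from that trace, the check is a plain field-wise version comparison. A simplified bash sketch (not a verbatim copy of scripts/common.sh; it skips the non-numeric handling the real decimal() helper performs):

# Compare dotted versions field by field; missing fields count as 0.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

lt 1.15 2 && echo 'lcov 1.15 predates 2'   # matches the trace: lt returns 0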
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[xtrace condensed: nvmf/common.sh@9-@22 sets the test defaults: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb (from nvme gen-hostnqn), NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn; it then sources scripts/common.sh and /etc/opt/spdk-pkgdep/paths/export.sh]
[xtrace condensed: paths/export.sh@2-@6 prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to an already long PATH several times over, exports it, and echoes the result]
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
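The "[: : integer expression expected" message above is a real but harmless bash error: common.sh line 33 feeds an empty string to -eq, which only accepts numeric operands. A generic guard pattern that avoids it (the variable name is illustrative; this is not a patch to SPDK's common.sh):

# '[' '' -eq 1 ']' fails because -eq requires integers on both sides.
flag=""                        # illustrative stand-in for the unset variable
if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
fi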
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.310 17:00:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:28.585 17:00:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:28.585 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.585 17:00:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:28.585 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:28.585 Found net devices under 0000:31:00.0: cvl_0_0 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.585 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:28.586 Found net devices under 0000:31:00.1: cvl_0_1 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.586 17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.586 
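The nvmf_tcp_init sequence above reduces to a short namespace recipe: one port of the dual-port NIC is moved into a private network namespace to play the target, while the other port stays in the default namespace as the initiator, so NVMe/TCP traffic crosses a real link on a single host. A minimal standalone sketch of the same steps (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing come from this trace; run as root and substitute your own NIC names):

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0           # port that becomes the target endpoint
INI_IF=cvl_0_1           # port that stays in the default ns as the initiator
NS=cvl_0_0_ns_spdk       # private namespace for the target

# Start from a clean slate, then split the two ports across namespaces.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT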
17:00:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:28.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:33:28.586 00:33:28.586 --- 10.0.0.2 ping statistics --- 00:33:28.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.586 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:28.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:33:28.586 00:33:28.586 --- 10.0.0.1 ping statistics --- 00:33:28.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.586 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2465915 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2465915 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2465915 ']' 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
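The two pings above validate each direction across the namespace boundary before any NVMe-oF traffic is attempted; only then does nvmfappstart launch the target inside the namespace. A sketch of that launch, assuming the build-tree layout used in this workspace (the real nvmfappstart additionally records the pid and waits for readiness on /var/tmp/spdk.sock):

#!/usr/bin/env bash
# Sanity-check reachability in both directions first.
ping -c 1 10.0.0.2                                  # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> default ns

# Start the SPDK target inside the namespace: shm id 0 (-i 0), all
# tracepoint groups enabled (-e 0xFFFF), single core mask (-m 0x2).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!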
00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:28.586 [2024-12-06 17:00:17.074875] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:33:28.586 [2024-12-06 17:00:17.074926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.586 [2024-12-06 17:00:17.145982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.586 [2024-12-06 17:00:17.161240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.586 [2024-12-06 17:00:17.161268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.586 [2024-12-06 17:00:17.161274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.586 [2024-12-06 17:00:17.161279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.586 [2024-12-06 17:00:17.161285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.586 [2024-12-06 17:00:17.161732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.586 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.586 [2024-12-06 17:00:17.267176] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.586 [2024-12-06 17:00:17.275366] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:28.846 null0 00:33:28.846 [2024-12-06 17:00:17.307334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2465940 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@60 -- # waitforlisten 2465940 /tmp/host.sock 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2465940 ']' 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:28.846 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:28.846 [2024-12-06 17:00:17.362655] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:33:28.846 [2024-12-06 17:00:17.362701] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2465940 ] 00:33:28.846 [2024-12-06 17:00:17.425696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.846 [2024-12-06 17:00:17.442200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:28.846 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:29.106 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.106 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 
--reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:29.106 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.106 17:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.048 [2024-12-06 17:00:18.563309] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:30.048 [2024-12-06 17:00:18.563326] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:30.048 [2024-12-06 17:00:18.563335] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:30.048 [2024-12-06 17:00:18.690698] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:30.308 [2024-12-06 17:00:18.793464] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:30.309 [2024-12-06 17:00:18.794458] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1add8d0:1 started. 00:33:30.309 [2024-12-06 17:00:18.795594] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:30.309 [2024-12-06 17:00:18.795628] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:30.309 [2024-12-06 17:00:18.795644] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:30.309 [2024-12-06 17:00:18.795655] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:30.309 [2024-12-06 17:00:18.795670] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.309 [2024-12-06 17:00:18.803151] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1add8d0 was disconnected and freed. delete nvme_qpair. 
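Every wait_for_bdev iteration below runs the same bdev_get_bdevs RPC against the host app's socket and normalizes the output to a single sorted line; a sketch of that helper, assuming rpc_cmd resolves to SPDK's scripts/rpc.py as it does in this test tree:

# Space-separated, sorted list of bdev names known to the host app.
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}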
00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:30.309 17:00:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:31.706 17:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.706 17:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:31.706 17:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:32.647 17:00:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:32.647 17:00:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:33.588 17:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:34.528 17:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.595 17:00:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:35.595 17:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:35.595 [2024-12-06 17:00:24.236541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:33:35.595 [2024-12-06 17:00:24.236576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.595 [2024-12-06 17:00:24.236585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.596 [2024-12-06 17:00:24.236592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.596 [2024-12-06 17:00:24.236598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.596 [2024-12-06 17:00:24.236610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.596 [2024-12-06 17:00:24.236615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.596 [2024-12-06 17:00:24.236621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.596 [2024-12-06 17:00:24.236626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.596 [2024-12-06 17:00:24.236632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:35.596 [2024-12-06 17:00:24.236637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:35.596 [2024-12-06 17:00:24.236642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab9fb0 is same with the state(6) to be set 00:33:35.596 [2024-12-06 17:00:24.246563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab9fb0 (9): Bad file descriptor 00:33:35.596 [2024-12-06 17:00:24.256596] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:35.596 [2024-12-06 17:00:24.256604] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
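The repeated get_bdev_list/sleep rounds above are the test polling for nvme0n1 to disappear after the address was deleted and the link taken down; the errno-110 (connection timed out) burst is the transport noticing. The polling pattern as a simplified sketch, reusing the helper above (the real wait_for_bdev in discovery_remove_ifc.sh may also bound the wait; treat this loop as illustrative):

# Poll once per second until the bdev list equals the expected value
# ('' means the bdev is gone, 'nvme0n1' means it is present).
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev ''    # block until nvme0n1 has been torn down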
00:33:35.596 [2024-12-06 17:00:24.256610] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:35.596 [2024-12-06 17:00:24.256614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:35.596 [2024-12-06 17:00:24.256630] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:36.533 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:36.533 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:36.533 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.533 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:36.533 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.533 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:36.533 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:36.792 [2024-12-06 17:00:25.295137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:33:36.792 [2024-12-06 17:00:25.295167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab9fb0 with addr=10.0.0.2, port=4420 00:33:36.792 [2024-12-06 17:00:25.295176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab9fb0 is same with the state(6) to be set 00:33:36.792 [2024-12-06 17:00:25.295192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab9fb0 (9): Bad file descriptor 00:33:36.792 [2024-12-06 17:00:25.295452] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:33:36.792 [2024-12-06 17:00:25.295469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:36.792 [2024-12-06 17:00:25.295475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:36.792 [2024-12-06 17:00:25.295481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:36.792 [2024-12-06 17:00:25.295486] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:36.792 [2024-12-06 17:00:25.295494] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:36.792 [2024-12-06 17:00:25.295498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:36.792 [2024-12-06 17:00:25.295503] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
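The cadence of these reconnect attempts is fixed by the options passed when discovery was started earlier in this trace: retry every second, fail fast I/O after one second, and give up on the controller once the loss timeout (2 s) expires. The invocation, restated for reference with the socket path and NQNs used above:

rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach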
00:33:36.792 [2024-12-06 17:00:25.295507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:36.792 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.792 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:36.792 17:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:37.730 [2024-12-06 17:00:26.297872] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:37.730 [2024-12-06 17:00:26.297887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:37.730 [2024-12-06 17:00:26.297895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:37.730 [2024-12-06 17:00:26.297900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:37.730 [2024-12-06 17:00:26.297906] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:33:37.730 [2024-12-06 17:00:26.297911] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:37.730 [2024-12-06 17:00:26.297915] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:37.730 [2024-12-06 17:00:26.297919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:37.730 [2024-12-06 17:00:26.297934] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:33:37.730 [2024-12-06 17:00:26.297951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:37.730 [2024-12-06 17:00:26.297958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.730 [2024-12-06 17:00:26.297966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:37.730 [2024-12-06 17:00:26.297971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.730 [2024-12-06 17:00:26.297977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:37.730 [2024-12-06 17:00:26.297983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.730 [2024-12-06 17:00:26.297989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:37.730 [2024-12-06 17:00:26.297994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.730 [2024-12-06 17:00:26.298000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:33:37.730 [2024-12-06 17:00:26.298005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.730 [2024-12-06 17:00:26.298010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:33:37.730 [2024-12-06 17:00:26.298148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa9660 (9): Bad file descriptor 00:33:37.730 [2024-12-06 17:00:26.299158] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:33:37.730 [2024-12-06 17:00:26.299168] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.731 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:37.993 17:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:38.929 17:00:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:38.929 17:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.864 [2024-12-06 17:00:28.350278] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:39.864 [2024-12-06 17:00:28.350293] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:39.864 [2024-12-06 17:00:28.350306] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:39.864 [2024-12-06 17:00:28.438541] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:33:39.864 17:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:39.864 [2024-12-06 17:00:28.538298] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:33:39.864 [2024-12-06 17:00:28.538984] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1abc050:1 started. 
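This re-attach follows directly from restoring the interface a few lines earlier (discovery_remove_ifc.sh@82-83): because bdev_nvme_start_discovery keeps the discovery service running, putting the address and link back is enough for a second controller instance ("cnode0, 2") and a fresh nvme1n1 to appear without any new RPC. The restore step as a sketch, reusing the helpers above:

# Put the target address back and bring the port up; the still-running
# discovery service reconnects on its own and creates nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1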
00:33:39.864 [2024-12-06 17:00:28.539869] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:39.864 [2024-12-06 17:00:28.539895] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:39.864 [2024-12-06 17:00:28.539910] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:39.864 [2024-12-06 17:00:28.539921] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:33:39.864 [2024-12-06 17:00:28.539926] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:39.864 [2024-12-06 17:00:28.547002] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1abc050 was disconnected and freed. delete nvme_qpair. 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2465940 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2465940 ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2465940 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465940 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465940' 00:33:41.242 killing process with pid 2465940 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2465940 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2465940 00:33:41.242 17:00:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:41.242 rmmod nvme_tcp 00:33:41.242 rmmod nvme_fabrics 00:33:41.242 rmmod nvme_keyring 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2465915 ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2465915 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2465915 ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2465915 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2465915 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2465915' 00:33:41.242 killing process with pid 2465915 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2465915 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2465915 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:41.242 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.243 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.243 17:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.780 17:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:43.780 00:33:43.780 real 0m20.467s 00:33:43.780 user 0m25.672s 00:33:43.780 sys 0m5.116s 00:33:43.780 17:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:43.780 17:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:43.780 ************************************ 00:33:43.780 END TEST nvmf_discovery_remove_ifc 00:33:43.780 ************************************ 00:33:43.780 17:00:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:43.780 17:00:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:43.780 17:00:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:43.780 17:00:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:43.780 ************************************ 00:33:43.780 START TEST nvmf_identify_kernel_target 00:33:43.780 ************************************ 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:33:43.780 * Looking for test storage... 
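The teardown traced above is the shared nvmftestfini path: retry-unload the kernel NVMe/TCP modules, kill the target app by PID, strip only the SPDK-tagged iptables rules, then remove the target network namespace and flush the leftover address. A minimal sketch of that sequence, using the PID and interface names from this run; the namespace-delete command is inferred, since _remove_spdk_ns is not expanded in the trace:

  # nvmftestfini teardown, condensed from the trace above
  set +e                                  # module unload can fail while references remain
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break    # also releases nvme_fabrics/nvme_keyring holders
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e

  kill 2465915                            # the nvmf_tgt reactor started for this test
  wait 2465915 2> /dev/null || true

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep non-SPDK rules intact
  ip netns delete cvl_0_0_ns_spdk         # inferred body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1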
00:33:43.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:43.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.780 --rc genhtml_branch_coverage=1 00:33:43.780 --rc genhtml_function_coverage=1 00:33:43.780 --rc genhtml_legend=1 00:33:43.780 --rc geninfo_all_blocks=1 00:33:43.780 --rc geninfo_unexecuted_blocks=1 00:33:43.780 00:33:43.780 ' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:43.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.780 --rc genhtml_branch_coverage=1 00:33:43.780 --rc genhtml_function_coverage=1 00:33:43.780 --rc genhtml_legend=1 00:33:43.780 --rc geninfo_all_blocks=1 00:33:43.780 --rc geninfo_unexecuted_blocks=1 00:33:43.780 00:33:43.780 ' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:43.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.780 --rc genhtml_branch_coverage=1 00:33:43.780 --rc genhtml_function_coverage=1 00:33:43.780 --rc genhtml_legend=1 00:33:43.780 --rc geninfo_all_blocks=1 00:33:43.780 --rc geninfo_unexecuted_blocks=1 00:33:43.780 00:33:43.780 ' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:43.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:43.780 --rc genhtml_branch_coverage=1 00:33:43.780 --rc genhtml_function_coverage=1 00:33:43.780 --rc genhtml_legend=1 00:33:43.780 --rc geninfo_all_blocks=1 00:33:43.780 --rc geninfo_unexecuted_blocks=1 00:33:43.780 00:33:43.780 ' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.780 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:43.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:43.781 17:00:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:49.056 17:00:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:49.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:49.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:49.056 Found net devices under 0000:31:00.0: cvl_0_0 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:49.056 Found net devices under 0000:31:00.1: cvl_0_1 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:49.056 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:49.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:33:49.057 00:33:49.057 --- 10.0.0.2 ping statistics --- 00:33:49.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.057 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:49.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:33:49.057 00:33:49.057 --- 10.0.0.1 ping statistics --- 00:33:49.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.057 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:49.057 17:00:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:49.057 17:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:51.596 Waiting for block devices as requested 00:33:51.596 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:51.596 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:51.596 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:51.596 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:51.596 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:51.596 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:51.596 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:51.855 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:51.855 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:52.115 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:52.115 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:52.115 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:52.115 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:52.115 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:52.375 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:52.375 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:52.375 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
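From here the test builds a kernel NVMe-oF target entirely through configfs, as traced below: load nvmet, create the subsystem, attach /dev/nvme0n1 as namespace 1, open a TCP port on 10.0.0.1:4420, and publish the subsystem on that port with a symlink. A condensed sketch of those steps; the NQN, device, and address come from this run, and the exact attribute file names are assumptions based on the standard nvmet configfs layout:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  modprobe nvmet                                     # exposes $nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model string (assumed file)
  echo 1 > "$subsys/attr_allow_any_host"                         # assumed file
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"

  # the symlink is what makes the subsystem visible to discovery
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Once the symlink is in place, the nvme discover output below reports two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.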
00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:52.635 No valid GPT data, bailing 00:33:52.635 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:33:52.897 00:33:52.897 Discovery Log Number of Records 2, Generation counter 2 00:33:52.897 =====Discovery Log Entry 0====== 00:33:52.897 trtype: tcp 00:33:52.897 adrfam: ipv4 00:33:52.897 subtype: current discovery subsystem 00:33:52.897 treq: not specified, sq flow control disable supported 00:33:52.897 portid: 1 00:33:52.897 trsvcid: 4420 00:33:52.897 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:52.897 traddr: 10.0.0.1 00:33:52.897 eflags: none 00:33:52.897 sectype: none 00:33:52.897 =====Discovery Log Entry 1====== 00:33:52.897 trtype: tcp 00:33:52.897 adrfam: ipv4 00:33:52.897 subtype: nvme subsystem 00:33:52.897 treq: not specified, sq flow control disable 
supported 00:33:52.897 portid: 1 00:33:52.897 trsvcid: 4420 00:33:52.897 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:52.897 traddr: 10.0.0.1 00:33:52.897 eflags: none 00:33:52.897 sectype: none 00:33:52.897 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:33:52.897 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:33:52.897 ===================================================== 00:33:52.897 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:33:52.897 ===================================================== 00:33:52.897 Controller Capabilities/Features 00:33:52.897 ================================ 00:33:52.897 Vendor ID: 0000 00:33:52.897 Subsystem Vendor ID: 0000 00:33:52.897 Serial Number: 646e747457021577c95b 00:33:52.897 Model Number: Linux 00:33:52.897 Firmware Version: 6.8.9-20 00:33:52.897 Recommended Arb Burst: 0 00:33:52.897 IEEE OUI Identifier: 00 00 00 00:33:52.897 Multi-path I/O 00:33:52.897 May have multiple subsystem ports: No 00:33:52.897 May have multiple controllers: No 00:33:52.897 Associated with SR-IOV VF: No 00:33:52.897 Max Data Transfer Size: Unlimited 00:33:52.897 Max Number of Namespaces: 0 00:33:52.897 Max Number of I/O Queues: 1024 00:33:52.897 NVMe Specification Version (VS): 1.3 00:33:52.897 NVMe Specification Version (Identify): 1.3 00:33:52.897 Maximum Queue Entries: 1024 00:33:52.897 Contiguous Queues Required: No 00:33:52.897 Arbitration Mechanisms Supported 00:33:52.897 Weighted Round Robin: Not Supported 00:33:52.897 Vendor Specific: Not Supported 00:33:52.897 Reset Timeout: 7500 ms 00:33:52.897 Doorbell Stride: 4 bytes 00:33:52.898 NVM Subsystem Reset: Not Supported 00:33:52.898 Command Sets Supported 00:33:52.898 NVM Command Set: Supported 00:33:52.898 Boot Partition: Not Supported 00:33:52.898 Memory Page Size Minimum: 4096 bytes 00:33:52.898 Memory Page Size Maximum: 4096 bytes 00:33:52.898 Persistent Memory Region: Not Supported 00:33:52.898 Optional Asynchronous Events Supported 00:33:52.898 Namespace Attribute Notices: Not Supported 00:33:52.898 Firmware Activation Notices: Not Supported 00:33:52.898 ANA Change Notices: Not Supported 00:33:52.898 PLE Aggregate Log Change Notices: Not Supported 00:33:52.898 LBA Status Info Alert Notices: Not Supported 00:33:52.898 EGE Aggregate Log Change Notices: Not Supported 00:33:52.898 Normal NVM Subsystem Shutdown event: Not Supported 00:33:52.898 Zone Descriptor Change Notices: Not Supported 00:33:52.898 Discovery Log Change Notices: Supported 00:33:52.898 Controller Attributes 00:33:52.898 128-bit Host Identifier: Not Supported 00:33:52.898 Non-Operational Permissive Mode: Not Supported 00:33:52.898 NVM Sets: Not Supported 00:33:52.898 Read Recovery Levels: Not Supported 00:33:52.898 Endurance Groups: Not Supported 00:33:52.898 Predictable Latency Mode: Not Supported 00:33:52.898 Traffic Based Keep ALive: Not Supported 00:33:52.898 Namespace Granularity: Not Supported 00:33:52.898 SQ Associations: Not Supported 00:33:52.898 UUID List: Not Supported 00:33:52.898 Multi-Domain Subsystem: Not Supported 00:33:52.898 Fixed Capacity Management: Not Supported 00:33:52.898 Variable Capacity Management: Not Supported 00:33:52.898 Delete Endurance Group: Not Supported 00:33:52.898 Delete NVM Set: Not Supported 00:33:52.898 Extended LBA Formats Supported: Not Supported 00:33:52.898 Flexible Data Placement 
Supported: Not Supported 00:33:52.898 00:33:52.898 Controller Memory Buffer Support 00:33:52.898 ================================ 00:33:52.898 Supported: No 00:33:52.898 00:33:52.898 Persistent Memory Region Support 00:33:52.898 ================================ 00:33:52.898 Supported: No 00:33:52.898 00:33:52.898 Admin Command Set Attributes 00:33:52.898 ============================ 00:33:52.898 Security Send/Receive: Not Supported 00:33:52.898 Format NVM: Not Supported 00:33:52.898 Firmware Activate/Download: Not Supported 00:33:52.898 Namespace Management: Not Supported 00:33:52.898 Device Self-Test: Not Supported 00:33:52.898 Directives: Not Supported 00:33:52.898 NVMe-MI: Not Supported 00:33:52.898 Virtualization Management: Not Supported 00:33:52.898 Doorbell Buffer Config: Not Supported 00:33:52.898 Get LBA Status Capability: Not Supported 00:33:52.898 Command & Feature Lockdown Capability: Not Supported 00:33:52.898 Abort Command Limit: 1 00:33:52.898 Async Event Request Limit: 1 00:33:52.898 Number of Firmware Slots: N/A 00:33:52.898 Firmware Slot 1 Read-Only: N/A 00:33:52.898 Firmware Activation Without Reset: N/A 00:33:52.898 Multiple Update Detection Support: N/A 00:33:52.898 Firmware Update Granularity: No Information Provided 00:33:52.898 Per-Namespace SMART Log: No 00:33:52.898 Asymmetric Namespace Access Log Page: Not Supported 00:33:52.898 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:33:52.898 Command Effects Log Page: Not Supported 00:33:52.898 Get Log Page Extended Data: Supported 00:33:52.898 Telemetry Log Pages: Not Supported 00:33:52.898 Persistent Event Log Pages: Not Supported 00:33:52.898 Supported Log Pages Log Page: May Support 00:33:52.898 Commands Supported & Effects Log Page: Not Supported 00:33:52.898 Feature Identifiers & Effects Log Page:May Support 00:33:52.898 NVMe-MI Commands & Effects Log Page: May Support 00:33:52.898 Data Area 4 for Telemetry Log: Not Supported 00:33:52.898 Error Log Page Entries Supported: 1 00:33:52.898 Keep Alive: Not Supported 00:33:52.898 00:33:52.898 NVM Command Set Attributes 00:33:52.898 ========================== 00:33:52.898 Submission Queue Entry Size 00:33:52.898 Max: 1 00:33:52.898 Min: 1 00:33:52.898 Completion Queue Entry Size 00:33:52.898 Max: 1 00:33:52.898 Min: 1 00:33:52.898 Number of Namespaces: 0 00:33:52.898 Compare Command: Not Supported 00:33:52.898 Write Uncorrectable Command: Not Supported 00:33:52.898 Dataset Management Command: Not Supported 00:33:52.898 Write Zeroes Command: Not Supported 00:33:52.898 Set Features Save Field: Not Supported 00:33:52.898 Reservations: Not Supported 00:33:52.898 Timestamp: Not Supported 00:33:52.898 Copy: Not Supported 00:33:52.898 Volatile Write Cache: Not Present 00:33:52.898 Atomic Write Unit (Normal): 1 00:33:52.898 Atomic Write Unit (PFail): 1 00:33:52.898 Atomic Compare & Write Unit: 1 00:33:52.898 Fused Compare & Write: Not Supported 00:33:52.898 Scatter-Gather List 00:33:52.898 SGL Command Set: Supported 00:33:52.898 SGL Keyed: Not Supported 00:33:52.898 SGL Bit Bucket Descriptor: Not Supported 00:33:52.898 SGL Metadata Pointer: Not Supported 00:33:52.898 Oversized SGL: Not Supported 00:33:52.898 SGL Metadata Address: Not Supported 00:33:52.898 SGL Offset: Supported 00:33:52.898 Transport SGL Data Block: Not Supported 00:33:52.898 Replay Protected Memory Block: Not Supported 00:33:52.898 00:33:52.898 Firmware Slot Information 00:33:52.898 ========================= 00:33:52.898 Active slot: 0 00:33:52.898 00:33:52.898 00:33:52.898 Error Log 00:33:52.898 
========= 00:33:52.898 00:33:52.898 Active Namespaces 00:33:52.898 ================= 00:33:52.898 Discovery Log Page 00:33:52.898 ================== 00:33:52.898 Generation Counter: 2 00:33:52.898 Number of Records: 2 00:33:52.898 Record Format: 0 00:33:52.898 00:33:52.898 Discovery Log Entry 0 00:33:52.898 ---------------------- 00:33:52.898 Transport Type: 3 (TCP) 00:33:52.898 Address Family: 1 (IPv4) 00:33:52.898 Subsystem Type: 3 (Current Discovery Subsystem) 00:33:52.898 Entry Flags: 00:33:52.898 Duplicate Returned Information: 0 00:33:52.898 Explicit Persistent Connection Support for Discovery: 0 00:33:52.898 Transport Requirements: 00:33:52.898 Secure Channel: Not Specified 00:33:52.898 Port ID: 1 (0x0001) 00:33:52.898 Controller ID: 65535 (0xffff) 00:33:52.898 Admin Max SQ Size: 32 00:33:52.898 Transport Service Identifier: 4420 00:33:52.898 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:33:52.898 Transport Address: 10.0.0.1 00:33:52.898 Discovery Log Entry 1 00:33:52.898 ---------------------- 00:33:52.898 Transport Type: 3 (TCP) 00:33:52.898 Address Family: 1 (IPv4) 00:33:52.898 Subsystem Type: 2 (NVM Subsystem) 00:33:52.898 Entry Flags: 00:33:52.898 Duplicate Returned Information: 0 00:33:52.898 Explicit Persistent Connection Support for Discovery: 0 00:33:52.898 Transport Requirements: 00:33:52.898 Secure Channel: Not Specified 00:33:52.898 Port ID: 1 (0x0001) 00:33:52.898 Controller ID: 65535 (0xffff) 00:33:52.898 Admin Max SQ Size: 32 00:33:52.898 Transport Service Identifier: 4420 00:33:52.898 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:33:52.898 Transport Address: 10.0.0.1 00:33:52.898 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:53.159 get_feature(0x01) failed 00:33:53.159 get_feature(0x02) failed 00:33:53.159 get_feature(0x04) failed 00:33:53.159 ===================================================== 00:33:53.159 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:53.159 ===================================================== 00:33:53.159 Controller Capabilities/Features 00:33:53.159 ================================ 00:33:53.159 Vendor ID: 0000 00:33:53.159 Subsystem Vendor ID: 0000 00:33:53.159 Serial Number: d2fc8bd7e1a9bed7f13b 00:33:53.159 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:33:53.159 Firmware Version: 6.8.9-20 00:33:53.159 Recommended Arb Burst: 6 00:33:53.159 IEEE OUI Identifier: 00 00 00 00:33:53.159 Multi-path I/O 00:33:53.159 May have multiple subsystem ports: Yes 00:33:53.159 May have multiple controllers: Yes 00:33:53.159 Associated with SR-IOV VF: No 00:33:53.159 Max Data Transfer Size: Unlimited 00:33:53.159 Max Number of Namespaces: 1024 00:33:53.159 Max Number of I/O Queues: 128 00:33:53.159 NVMe Specification Version (VS): 1.3 00:33:53.159 NVMe Specification Version (Identify): 1.3 00:33:53.159 Maximum Queue Entries: 1024 00:33:53.159 Contiguous Queues Required: No 00:33:53.159 Arbitration Mechanisms Supported 00:33:53.159 Weighted Round Robin: Not Supported 00:33:53.159 Vendor Specific: Not Supported 00:33:53.159 Reset Timeout: 7500 ms 00:33:53.159 Doorbell Stride: 4 bytes 00:33:53.159 NVM Subsystem Reset: Not Supported 00:33:53.159 Command Sets Supported 00:33:53.159 NVM Command Set: Supported 00:33:53.159 Boot Partition: Not Supported 00:33:53.159 
Memory Page Size Minimum: 4096 bytes 00:33:53.159 Memory Page Size Maximum: 4096 bytes 00:33:53.159 Persistent Memory Region: Not Supported 00:33:53.159 Optional Asynchronous Events Supported 00:33:53.159 Namespace Attribute Notices: Supported 00:33:53.159 Firmware Activation Notices: Not Supported 00:33:53.159 ANA Change Notices: Supported 00:33:53.159 PLE Aggregate Log Change Notices: Not Supported 00:33:53.159 LBA Status Info Alert Notices: Not Supported 00:33:53.159 EGE Aggregate Log Change Notices: Not Supported 00:33:53.159 Normal NVM Subsystem Shutdown event: Not Supported 00:33:53.159 Zone Descriptor Change Notices: Not Supported 00:33:53.159 Discovery Log Change Notices: Not Supported 00:33:53.159 Controller Attributes 00:33:53.159 128-bit Host Identifier: Supported 00:33:53.159 Non-Operational Permissive Mode: Not Supported 00:33:53.159 NVM Sets: Not Supported 00:33:53.159 Read Recovery Levels: Not Supported 00:33:53.159 Endurance Groups: Not Supported 00:33:53.159 Predictable Latency Mode: Not Supported 00:33:53.159 Traffic Based Keep ALive: Supported 00:33:53.159 Namespace Granularity: Not Supported 00:33:53.159 SQ Associations: Not Supported 00:33:53.159 UUID List: Not Supported 00:33:53.159 Multi-Domain Subsystem: Not Supported 00:33:53.159 Fixed Capacity Management: Not Supported 00:33:53.159 Variable Capacity Management: Not Supported 00:33:53.159 Delete Endurance Group: Not Supported 00:33:53.159 Delete NVM Set: Not Supported 00:33:53.159 Extended LBA Formats Supported: Not Supported 00:33:53.159 Flexible Data Placement Supported: Not Supported 00:33:53.159 00:33:53.159 Controller Memory Buffer Support 00:33:53.159 ================================ 00:33:53.159 Supported: No 00:33:53.159 00:33:53.159 Persistent Memory Region Support 00:33:53.159 ================================ 00:33:53.159 Supported: No 00:33:53.159 00:33:53.159 Admin Command Set Attributes 00:33:53.159 ============================ 00:33:53.159 Security Send/Receive: Not Supported 00:33:53.159 Format NVM: Not Supported 00:33:53.159 Firmware Activate/Download: Not Supported 00:33:53.159 Namespace Management: Not Supported 00:33:53.159 Device Self-Test: Not Supported 00:33:53.159 Directives: Not Supported 00:33:53.159 NVMe-MI: Not Supported 00:33:53.159 Virtualization Management: Not Supported 00:33:53.160 Doorbell Buffer Config: Not Supported 00:33:53.160 Get LBA Status Capability: Not Supported 00:33:53.160 Command & Feature Lockdown Capability: Not Supported 00:33:53.160 Abort Command Limit: 4 00:33:53.160 Async Event Request Limit: 4 00:33:53.160 Number of Firmware Slots: N/A 00:33:53.160 Firmware Slot 1 Read-Only: N/A 00:33:53.160 Firmware Activation Without Reset: N/A 00:33:53.160 Multiple Update Detection Support: N/A 00:33:53.160 Firmware Update Granularity: No Information Provided 00:33:53.160 Per-Namespace SMART Log: Yes 00:33:53.160 Asymmetric Namespace Access Log Page: Supported 00:33:53.160 ANA Transition Time : 10 sec 00:33:53.160 00:33:53.160 Asymmetric Namespace Access Capabilities 00:33:53.160 ANA Optimized State : Supported 00:33:53.160 ANA Non-Optimized State : Supported 00:33:53.160 ANA Inaccessible State : Supported 00:33:53.160 ANA Persistent Loss State : Supported 00:33:53.160 ANA Change State : Supported 00:33:53.160 ANAGRPID is not changed : No 00:33:53.160 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:33:53.160 00:33:53.160 ANA Group Identifier Maximum : 128 00:33:53.160 Number of ANA Group Identifiers : 128 00:33:53.160 Max Number of Allowed Namespaces : 1024 00:33:53.160 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:33:53.160 Command Effects Log Page: Supported 00:33:53.160 Get Log Page Extended Data: Supported 00:33:53.160 Telemetry Log Pages: Not Supported 00:33:53.160 Persistent Event Log Pages: Not Supported 00:33:53.160 Supported Log Pages Log Page: May Support 00:33:53.160 Commands Supported & Effects Log Page: Not Supported 00:33:53.160 Feature Identifiers & Effects Log Page:May Support 00:33:53.160 NVMe-MI Commands & Effects Log Page: May Support 00:33:53.160 Data Area 4 for Telemetry Log: Not Supported 00:33:53.160 Error Log Page Entries Supported: 128 00:33:53.160 Keep Alive: Supported 00:33:53.160 Keep Alive Granularity: 1000 ms 00:33:53.160 00:33:53.160 NVM Command Set Attributes 00:33:53.160 ========================== 00:33:53.160 Submission Queue Entry Size 00:33:53.160 Max: 64 00:33:53.160 Min: 64 00:33:53.160 Completion Queue Entry Size 00:33:53.160 Max: 16 00:33:53.160 Min: 16 00:33:53.160 Number of Namespaces: 1024 00:33:53.160 Compare Command: Not Supported 00:33:53.160 Write Uncorrectable Command: Not Supported 00:33:53.160 Dataset Management Command: Supported 00:33:53.160 Write Zeroes Command: Supported 00:33:53.160 Set Features Save Field: Not Supported 00:33:53.160 Reservations: Not Supported 00:33:53.160 Timestamp: Not Supported 00:33:53.160 Copy: Not Supported 00:33:53.160 Volatile Write Cache: Present 00:33:53.160 Atomic Write Unit (Normal): 1 00:33:53.160 Atomic Write Unit (PFail): 1 00:33:53.160 Atomic Compare & Write Unit: 1 00:33:53.160 Fused Compare & Write: Not Supported 00:33:53.160 Scatter-Gather List 00:33:53.160 SGL Command Set: Supported 00:33:53.160 SGL Keyed: Not Supported 00:33:53.160 SGL Bit Bucket Descriptor: Not Supported 00:33:53.160 SGL Metadata Pointer: Not Supported 00:33:53.160 Oversized SGL: Not Supported 00:33:53.160 SGL Metadata Address: Not Supported 00:33:53.160 SGL Offset: Supported 00:33:53.160 Transport SGL Data Block: Not Supported 00:33:53.160 Replay Protected Memory Block: Not Supported 00:33:53.160 00:33:53.160 Firmware Slot Information 00:33:53.160 ========================= 00:33:53.160 Active slot: 0 00:33:53.160 00:33:53.160 Asymmetric Namespace Access 00:33:53.160 =========================== 00:33:53.160 Change Count : 0 00:33:53.160 Number of ANA Group Descriptors : 1 00:33:53.160 ANA Group Descriptor : 0 00:33:53.160 ANA Group ID : 1 00:33:53.160 Number of NSID Values : 1 00:33:53.160 Change Count : 0 00:33:53.160 ANA State : 1 00:33:53.160 Namespace Identifier : 1 00:33:53.160 00:33:53.160 Commands Supported and Effects 00:33:53.160 ============================== 00:33:53.160 Admin Commands 00:33:53.160 -------------- 00:33:53.160 Get Log Page (02h): Supported 00:33:53.160 Identify (06h): Supported 00:33:53.160 Abort (08h): Supported 00:33:53.160 Set Features (09h): Supported 00:33:53.160 Get Features (0Ah): Supported 00:33:53.160 Asynchronous Event Request (0Ch): Supported 00:33:53.160 Keep Alive (18h): Supported 00:33:53.160 I/O Commands 00:33:53.160 ------------ 00:33:53.160 Flush (00h): Supported 00:33:53.160 Write (01h): Supported LBA-Change 00:33:53.160 Read (02h): Supported 00:33:53.160 Write Zeroes (08h): Supported LBA-Change 00:33:53.160 Dataset Management (09h): Supported 00:33:53.160 00:33:53.160 Error Log 00:33:53.160 ========= 00:33:53.160 Entry: 0 00:33:53.160 Error Count: 0x3 00:33:53.160 Submission Queue Id: 0x0 00:33:53.160 Command Id: 0x5 00:33:53.160 Phase Bit: 0 00:33:53.160 Status Code: 0x2 00:33:53.160 Status Code Type: 0x0 00:33:53.160 Do Not Retry: 1 00:33:53.160 
Error Location: 0x28 00:33:53.160 LBA: 0x0 00:33:53.160 Namespace: 0x0 00:33:53.160 Vendor Log Page: 0x0 00:33:53.160 ----------- 00:33:53.160 Entry: 1 00:33:53.160 Error Count: 0x2 00:33:53.160 Submission Queue Id: 0x0 00:33:53.160 Command Id: 0x5 00:33:53.160 Phase Bit: 0 00:33:53.160 Status Code: 0x2 00:33:53.160 Status Code Type: 0x0 00:33:53.160 Do Not Retry: 1 00:33:53.160 Error Location: 0x28 00:33:53.160 LBA: 0x0 00:33:53.160 Namespace: 0x0 00:33:53.160 Vendor Log Page: 0x0 00:33:53.160 ----------- 00:33:53.160 Entry: 2 00:33:53.160 Error Count: 0x1 00:33:53.160 Submission Queue Id: 0x0 00:33:53.160 Command Id: 0x4 00:33:53.160 Phase Bit: 0 00:33:53.160 Status Code: 0x2 00:33:53.160 Status Code Type: 0x0 00:33:53.160 Do Not Retry: 1 00:33:53.160 Error Location: 0x28 00:33:53.160 LBA: 0x0 00:33:53.160 Namespace: 0x0 00:33:53.160 Vendor Log Page: 0x0 00:33:53.160 00:33:53.160 Number of Queues 00:33:53.160 ================ 00:33:53.160 Number of I/O Submission Queues: 128 00:33:53.160 Number of I/O Completion Queues: 128 00:33:53.160 00:33:53.160 ZNS Specific Controller Data 00:33:53.160 ============================ 00:33:53.160 Zone Append Size Limit: 0 00:33:53.160 00:33:53.160 00:33:53.160 Active Namespaces 00:33:53.160 ================= 00:33:53.160 get_feature(0x05) failed 00:33:53.160 Namespace ID:1 00:33:53.160 Command Set Identifier: NVM (00h) 00:33:53.160 Deallocate: Supported 00:33:53.160 Deallocated/Unwritten Error: Not Supported 00:33:53.160 Deallocated Read Value: Unknown 00:33:53.160 Deallocate in Write Zeroes: Not Supported 00:33:53.160 Deallocated Guard Field: 0xFFFF 00:33:53.160 Flush: Supported 00:33:53.160 Reservation: Not Supported 00:33:53.160 Namespace Sharing Capabilities: Multiple Controllers 00:33:53.160 Size (in LBAs): 3750748848 (1788GiB) 00:33:53.160 Capacity (in LBAs): 3750748848 (1788GiB) 00:33:53.160 Utilization (in LBAs): 3750748848 (1788GiB) 00:33:53.160 UUID: 2a87d891-3163-445d-8321-855fc779ea6f 00:33:53.160 Thin Provisioning: Not Supported 00:33:53.160 Per-NS Atomic Units: Yes 00:33:53.160 Atomic Write Unit (Normal): 8 00:33:53.160 Atomic Write Unit (PFail): 8 00:33:53.160 Preferred Write Granularity: 8 00:33:53.160 Atomic Compare & Write Unit: 8 00:33:53.160 Atomic Boundary Size (Normal): 0 00:33:53.160 Atomic Boundary Size (PFail): 0 00:33:53.160 Atomic Boundary Offset: 0 00:33:53.160 NGUID/EUI64 Never Reused: No 00:33:53.160 ANA group ID: 1 00:33:53.160 Namespace Write Protected: No 00:33:53.160 Number of LBA Formats: 1 00:33:53.160 Current LBA Format: LBA Format #00 00:33:53.160 LBA Format #00: Data Size: 512 Metadata Size: 0 00:33:53.160 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.160 rmmod nvme_tcp 00:33:53.160 rmmod nvme_fabrics 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:53.160 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.161 17:00:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:55.068 17:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:57.603 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:57.603 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:58.171 00:33:58.171 real 0m14.588s 00:33:58.171 user 0m3.482s 00:33:58.171 sys 0m8.063s 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:33:58.171 ************************************ 00:33:58.171 END TEST nvmf_identify_kernel_target 00:33:58.171 ************************************ 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:58.171 ************************************ 00:33:58.171 START TEST nvmf_auth_host 00:33:58.171 ************************************ 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:33:58.171 * Looking for test storage... 
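The clean_kernel_target trace above dismantles the configfs target in strict reverse order of creation: disable the namespace, drop the port-to-subsystem symlink, then rmdir namespace, port, and subsystem before unloading the modules. Reconstructed as a standalone sketch; the trace shows only a bare 'echo 0', so the enable attribute as its destination is an assumption:

nqn=nqn.2016-06.io.spdk:testnqn
cfs=/sys/kernel/config/nvmet
echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # assumed target of the bare 'echo 0'
rm -f "$cfs/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
rmdir "$cfs/subsystems/$nqn/namespaces/1"
rmdir "$cfs/ports/1"
rmdir "$cfs/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet

The order matters: configfs refuses to rmdir a directory that still has children or symlinks pointing at it.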
00:33:58.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:58.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.171 --rc genhtml_branch_coverage=1 00:33:58.171 --rc genhtml_function_coverage=1 00:33:58.171 --rc genhtml_legend=1 00:33:58.171 --rc geninfo_all_blocks=1 00:33:58.171 --rc geninfo_unexecuted_blocks=1 00:33:58.171 00:33:58.171 ' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:58.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.171 --rc genhtml_branch_coverage=1 00:33:58.171 --rc genhtml_function_coverage=1 00:33:58.171 --rc genhtml_legend=1 00:33:58.171 --rc geninfo_all_blocks=1 00:33:58.171 --rc geninfo_unexecuted_blocks=1 00:33:58.171 00:33:58.171 ' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:58.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.171 --rc genhtml_branch_coverage=1 00:33:58.171 --rc genhtml_function_coverage=1 00:33:58.171 --rc genhtml_legend=1 00:33:58.171 --rc geninfo_all_blocks=1 00:33:58.171 --rc geninfo_unexecuted_blocks=1 00:33:58.171 00:33:58.171 ' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:58.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:58.171 --rc genhtml_branch_coverage=1 00:33:58.171 --rc genhtml_function_coverage=1 00:33:58.171 --rc genhtml_legend=1 00:33:58.171 --rc geninfo_all_blocks=1 00:33:58.171 --rc geninfo_unexecuted_blocks=1 00:33:58.171 00:33:58.171 ' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:58.171 17:00:46 
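The cmp_versions walk traced above is how the harness decides that lcov 1.15 predates 2 before enabling the branch and function coverage flags: both version strings are split on '.' and '-' into arrays and compared field by field. Where GNU coreutils can be assumed, the same ordering test collapses to sort -V; a minimal sketch reusing the helper name from the trace:

lt() {  # true if $1 sorts strictly before $2 as a version string
    [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
lt 1.15 2 && echo 'old lcov: add --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'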
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:58.171 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:58.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:58.172 17:00:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:03.445 17:00:51 
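gather_supported_nvmf_pci_devs, starting here, sorts NICs into the e810, x722, and mlx arrays by PCI vendor:device ID before settling on a transport interface; the ID matches run in the lines that follow, where 0x159b is what both E810 ports on this machine report. The same inventory can be taken directly with lspci, using a few of the IDs from the trace:

# list candidate NVMe-oF NICs by numeric vendor:device ID (-D prints the domain, -n numeric IDs)
for id in 8086:1592 8086:159b 8086:37d2 15b3:101d; do
    lspci -Dn -d "$id"
done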
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:03.445 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:03.445 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.445 
17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.445 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:03.446 Found net devices under 0000:31:00.0: cvl_0_0 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:03.446 Found net devices under 0000:31:00.1: cvl_0_1 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:03.446 17:00:51 
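Each 'Found net devices under …' record comes from globbing /sys/bus/pci/devices/$pci/net/*, the kernel's mapping from a PCI function to its netdev name; that is how 0000:31:00.0 resolves to cvl_0_0 and 0000:31:00.1 to cvl_0_1, with an 'up == up' operstate check on each. The lookup is reproducible straight from sysfs:

for pci in 0000:31:00.0 0000:31:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue
        printf '%s -> %s (%s)\n' "$pci" "${dev##*/}" "$(cat "$dev/operstate")"
    done
done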
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:03.446 17:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:03.446 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:03.446 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:03.446 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:03.446 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:03.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:03.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:34:03.705 00:34:03.705 --- 10.0.0.2 ping statistics --- 00:34:03.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.705 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:03.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:03.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:34:03.705 00:34:03.705 --- 10.0.0.1 ping statistics --- 00:34:03.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:03.705 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2480683 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2480683 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2480683 ']' 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
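nvmf_tcp_init builds a two-host topology on a single machine: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24, an iptables rule opens TCP 4420 on the initiator side, and one ping in each direction (0.629 ms and 0.271 ms above) proves the path. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt launched next is prefixed with 'ip netns exec cvl_0_0_ns_spdk', so it listens on the target side of this split.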
00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.705 17:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:04.639 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eb4e961b5286f809a90e875e971cedd2 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.SdL 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eb4e961b5286f809a90e875e971cedd2 0 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eb4e961b5286f809a90e875e971cedd2 0 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eb4e961b5286f809a90e875e971cedd2 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.SdL 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.SdL 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.SdL 00:34:04.640 17:00:53 
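gen_dhchap_key, as traced above for keys[0], draws len/2 random bytes as a hex string (xxd -p -c0 -l 16 yields 32 hex characters) and hands it to format_key, whose inline python body is not echoed by xtrace. The byte counts imply the hex string itself serves as the secret, wrapped in the NVMe DH-HMAC-CHAP representation: base64 of the secret plus a little-endian CRC-32, behind a DHHC-1:<digest-id>: prefix. A standalone sketch of that step, under those assumptions:

key=$(xxd -p -c0 -l 16 /dev/urandom)    # 32 hex chars, i.e. a 32-byte secret, as for 'null 32'
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                             # the hex string is the key material
crc = struct.pack('<I', zlib.crc32(secret) & 0xffffffff)  # CRC-32 appended little-endian
print('DHHC-1:00:%s:' % base64.b64encode(secret + crc).decode())  # 00 = null digest
PY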
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a59807767febf5426b7e005f2e5ddab49c75f8608dba020ed513c9f25faa31b7 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.s0q 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a59807767febf5426b7e005f2e5ddab49c75f8608dba020ed513c9f25faa31b7 3 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a59807767febf5426b7e005f2e5ddab49c75f8608dba020ed513c9f25faa31b7 3 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a59807767febf5426b7e005f2e5ddab49c75f8608dba020ed513c9f25faa31b7 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.s0q 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.s0q 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.s0q 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1fea969513f4b2c03f563dbdf270b7a1c36cd6622e451eb9 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Sqy 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # 
format_dhchap_key 1fea969513f4b2c03f563dbdf270b7a1c36cd6622e451eb9 0 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1fea969513f4b2c03f563dbdf270b7a1c36cd6622e451eb9 0 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1fea969513f4b2c03f563dbdf270b7a1c36cd6622e451eb9 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Sqy 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Sqy 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Sqy 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c2e08be04539283ed061f0859c2aa06bc78bb15277d71285 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nMi 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c2e08be04539283ed061f0859c2aa06bc78bb15277d71285 2 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c2e08be04539283ed061f0859c2aa06bc78bb15277d71285 2 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c2e08be04539283ed061f0859c2aa06bc78bb15277d71285 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nMi 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nMi 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.nMi 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@751 -- # local digest len file key 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=88e7be2e4c61792e06faaae1d3870902 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.E5J 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 88e7be2e4c61792e06faaae1d3870902 1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 88e7be2e4c61792e06faaae1d3870902 1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=88e7be2e4c61792e06faaae1d3870902 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:04.640 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.E5J 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.E5J 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.E5J 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=45d710303ffa8edf54c2cf0ab9fe69e9 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ROL 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 45d710303ffa8edf54c2cf0ab9fe69e9 1 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 45d710303ffa8edf54c2cf0ab9fe69e9 1 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key 
digest 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=45d710303ffa8edf54c2cf0ab9fe69e9 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ROL 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ROL 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ROL 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dee0d16b47ef3a11b4ff10bcf856a6295ff55a252d4fe9a4 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Nuo 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dee0d16b47ef3a11b4ff10bcf856a6295ff55a252d4fe9a4 2 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dee0d16b47ef3a11b4ff10bcf856a6295ff55a252d4fe9a4 2 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dee0d16b47ef3a11b4ff10bcf856a6295ff55a252d4fe9a4 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Nuo 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Nuo 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Nuo 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.900 17:00:53 
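The digests map echoed repeatedly above (null=0, sha256=1, sha384=2, sha512=3) supplies the digest id for each key, and len/2 fixes the secret size at 32, 48, or 64 bytes; those two parameters are all that distinguish the ten keys minted in this stretch. Assuming the id is rendered as the two-digit hex field of the DHHC-1 header:

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
printf 'DHHC-1:%02x:<base64 payload>:\n' "${digests[sha384]}"    # -> DHHC-1:02:<base64 payload>: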
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:04.900 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2be0621805728dee3409ccd207f4dc72 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3cs 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2be0621805728dee3409ccd207f4dc72 0 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2be0621805728dee3409ccd207f4dc72 0 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2be0621805728dee3409ccd207f4dc72 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3cs 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3cs 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3cs 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0d1b70077287ad5c467e48739bbc569c16b7fdcae4aa4c39a60775e431a4bf08 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.iYa 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0d1b70077287ad5c467e48739bbc569c16b7fdcae4aa4c39a60775e431a4bf08 3 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0d1b70077287ad5c467e48739bbc569c16b7fdcae4aa4c39a60775e431a4bf08 3 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=0d1b70077287ad5c467e48739bbc569c16b7fdcae4aa4c39a60775e431a4bf08 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.iYa 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.iYa 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.iYa 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2480683 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2480683 ']' 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.901 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SdL 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.s0q ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.s0q 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Sqy 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.161 
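The one step xtrace does not expand above is the `python -` heredoc inside format_key. Judging by the DHHC-1 strings that appear later in the trace (DHHC-1:<two-digit hash id>:<base64>:), it emits the standard NVMe in-band-authentication secret representation: base64 of the raw secret with its CRC-32 appended. The following is a plausible reconstruction, not the verbatim SPDK code; the little-endian CRC byte order is an assumption taken from the NVMe DH-HMAC-CHAP secret format:

# Hypothetical body of the traced 'python -' step; $prefix, $key and $digest
# are the variables shown in the trace (DHHC-1, the hex secret, the hash id).
python - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib

prefix, hexkey, digest = sys.argv[1:4]
secret = bytes.fromhex(hexkey)
# secret is transmitted as base64(secret || CRC-32(secret)), CRC little-endian
crc = zlib.crc32(secret).to_bytes(4, 'little')
print('{}:{:02x}:{}:'.format(prefix, int(digest), base64.b64encode(secret + crc).decode()))
EOF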
17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.nMi ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nMi 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.E5J 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ROL ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ROL 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Nuo 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3cs ]] 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3cs 00:34:05.161 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.iYa 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
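The host/auth.sh@80-82 entries above are a single registration loop: each generated secret file becomes a named keyring entry in the running SPDK target, and the matching controller key is registered only when one exists (ckeys[4] is empty, so key4 never gets a ckey4). In sketch form, using the rpc_cmd wrapper around scripts/rpc.py seen throughout the trace:

# Register every DHCHAP secret, plus its controller counterpart when present.
for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done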
00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:05.162 17:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:07.707 Waiting for block devices as requested 00:34:07.707 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:07.707 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:07.707 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:07.707 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:07.707 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:07.965 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:07.965 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:07.965 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:07.965 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:34:08.224 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:34:08.224 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:34:08.224 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:34:08.224 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:34:08.483 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:34:08.483 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:34:08.483 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:34:08.483 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:09.420 No valid GPT data, bailing 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:09.420 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:09.421 17:00:57 
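configure_kernel_target drives the kernel nvmet stack entirely through configfs: the three mkdir calls above create a subsystem, a namespace and a port, and the bare echo entries that follow fill in their attributes (xtrace does not show the redirection targets; the paths below are the standard nvmet configfs attribute names, and 10.0.0.1 is what get_main_ns_ip resolved from NVMF_INITIATOR_IP for tcp). A sketch under those assumptions:

# Assumed attribute layout behind the traced mkdir/echo/ln sequence.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1

# the echoed SPDK-nqn... string presumably sets a subsystem identity attribute
echo 1 > "$subsys/attr_allow_any_host"   # later narrowed again by nvmet_auth_init
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"      # expose the subsystem on the port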
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:34:09.421
00:34:09.421 Discovery Log Number of Records 2, Generation counter 2
00:34:09.421 =====Discovery Log Entry 0======
00:34:09.421 trtype: tcp
00:34:09.421 adrfam: ipv4
00:34:09.421 subtype: current discovery subsystem
00:34:09.421 treq: not specified, sq flow control disable supported
00:34:09.421 portid: 1
00:34:09.421 trsvcid: 4420
00:34:09.421 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:34:09.421 traddr: 10.0.0.1
00:34:09.421 eflags: none
00:34:09.421 sectype: none
00:34:09.421 =====Discovery Log Entry 1======
00:34:09.421 trtype: tcp
00:34:09.421 adrfam: ipv4
00:34:09.421 subtype: nvme subsystem
00:34:09.421 treq: not specified, sq flow control disable supported
00:34:09.421 portid: 1
00:34:09.421 trsvcid: 4420
00:34:09.421 subnqn: nqn.2024-02.io.spdk:cnode0
00:34:09.421 traddr: 10.0.0.1
00:34:09.421 eflags: none
00:34:09.421 sectype: none
00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.421 17:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.421 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.683 nvme0n1 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
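nvmet_auth_set_key (the host/auth.sh@42-51 entries above) hands the same secrets to the kernel side of the link for each digest/dhgroup/key combination: the trace shows the hmac() string, the FFDHE group name and the DHHC-1 secrets being echoed, and the natural destinations, not visible in the log, are the dhchap attributes of the nvmet host directory created by nvmet_auth_init. A sketch, with the attribute file names being an assumption based on the kernel's nvmet configfs interface:

# Assumed targets of the four echo calls traced in nvmet_auth_set_key.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$host/dhchap_hash"   # digest under test
echo ffdhe2048 > "$host/dhchap_dhgroup"     # DH group under test
echo "$key" > "$host/dhchap_key"            # host secret (DHHC-1:..)
# bidirectional auth only when a controller key was generated for this slot
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"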
00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.683 nvme0n1 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.683 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.684 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.684 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.684 17:00:58 
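connect_authenticate, whose body is traced above, is the probe repeated for every combination from here on: narrow the initiator's negotiable digests and DH groups with bdev_nvme_set_options, attach using the keyring names registered earlier, verify the controller actually came up, and detach again. Condensed from the trace (the real function resolves the address via get_main_ns_ip instead of hard-coding it):

# One authentication probe, as repeated throughout the remainder of the log.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    # the success check seen in the trace: a controller named nvme0 must exist
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}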
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.684 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.944 nvme0n1 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:09.944 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:09.945 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:09.945 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:09.945 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:09.945 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.945 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.205 nvme0n1 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.205 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.206 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.466 nvme0n1 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.466 17:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.466 nvme0n1 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.466 17:00:59 
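From this point the trace is one nested sweep, visible in the host/auth.sh@100-@103 loop headers: for every digest, every FFDHE group and every key slot, the kernel target is pointed at the key and the initiator must authenticate against it (key4 runs without a controller key, since ckeys[4] is empty). The driving loop is equivalent to:

# The digest x dhgroup x key-slot matrix behind the repetition that follows.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done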
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.466 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.467 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.726 nvme0n1 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.726 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.727 
17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.727 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.987 nvme0n1 00:34:10.987 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.987 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:10.987 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:10.988 17:00:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.988 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.248 nvme0n1 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:11.248 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.249 17:00:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.249 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.508 nvme0n1 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.508 17:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:11.508 17:01:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.508 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.509 nvme0n1 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.509 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:11.769 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.770 nvme0n1 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:11.770 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:12.030 17:01:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.030 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.292 nvme0n1 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
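Each pass traced above starts with nvmet_auth_set_key <digest> <dhgroup> <keyid> (host/auth.sh@42-51), which echoes the HMAC name, the DH group, the DHHC-1 host key and, when one exists, the controller key. The trace records only the echoed values, not where they are written; the sketch below is a minimal reconstruction assuming they land in the kernel target's configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) -- both those paths and the host NQN directory are assumptions, not shown in the log:

    # Minimal sketch of the traced helper; the configfs targets are assumed,
    # the echoed values are taken verbatim from the trace above.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3 key ckey
        key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. 'hmac(sha256)'
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. 'ffdhe4096'
        echo "$key"          > "$host/dhchap_key"       # DHHC-1:0x:...: host key
        # host/auth.sh@51: a controller key is only set when ckey is non-empty
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
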
00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.292 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.553 nvme0n1 00:34:12.553 17:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.553 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.554 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.814 nvme0n1 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:12.814 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:12.815 17:01:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.815 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.075 nvme0n1 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.075 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.076 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.336 nvme0n1 00:34:13.336 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.336 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.336 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.336 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.336 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.336 17:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 
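The get_main_ns_ip expansion that recurs throughout this stretch (nvmf/common.sh@769-783) resolves the initiator address indirectly: an associative array maps each transport to the name of the environment variable holding the address, and that name is then dereferenced. A sketch; the transport variable's name (TEST_TRANSPORT here) is an assumption, since the trace only ever shows its value, tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Bail out if the transport is unset or unknown (the [[ -z tcp ]] checks above)
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}    # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
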
00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:13.336 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.337 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:13.337 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.337 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.597 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.598 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 nvme0n1 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:13.858 17:01:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:13.858 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:13.859 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:13.859 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.859 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.428 nvme0n1 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.428 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.429 17:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.689 nvme0n1 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
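[editor note] Key index 4 is the boundary case in each group: its controller-key slot is empty, so the [[ -z '' ]] check at auth.sh@51 skips the ckey echo and the attach at auth.sh@61 passes only --dhchap-key key4, i.e. unidirectional authentication (the host is challenged, the controller is not). The array expansion at auth.sh@58 is what makes the extra argument disappear; a small demo with illustrative values:

```bash
# auth.sh@58 in isolation: the --dhchap-ctrlr-key argument only materializes
# when a controller key exists for this key index (values here are illustrative).
declare -a ckeys=([1]='DHHC-1:02:...' [4]='')
for keyid in 1 4; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
done
# keyid=1 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 extra args: <none>
```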
nvmf/common.sh@770 -- # ip_candidates=() 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.689 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.275 nvme0n1 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.275 17:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:15.844 nvme0n1 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:15.844 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.845 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.414 nvme0n1 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:16.414 
17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.414 17:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.981 nvme0n1 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:16.981 
17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.981 17:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.549 nvme0n1 00:34:17.549 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.549 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:17.549 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.550 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:17.808 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
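[editor note] On the target side, the four echoes inside nvmet_auth_set_key (auth.sh@48-51: the 'hmac(...)' digest, the DH group, the key, and conditionally the controller key) are the kernel-nvmet half of each iteration. The function body is not shown in this excerpt, so the following is only a plausible reading, assuming the echoes land on the standard nvmet configfs host attributes:

```bash
# Hedged sketch: where nvmet_auth_set_key's four echoes plausibly end up.
# Assumes kernel nvmet with configfs mounted and the host entry already created.
HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)' > "$HOST/dhchap_hash"      # auth.sh@48: digest under test
echo 'ffdhe8192'    > "$HOST/dhchap_dhgroup"   # auth.sh@49: DH group under test
echo "$key"         > "$HOST/dhchap_key"       # auth.sh@50: host secret
[[ -n "$ckey" ]] && echo "$ckey" > "$HOST/dhchap_ctrl_key"  # auth.sh@51: bidirectional only
```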
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:17.809 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:17.809 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:17.809 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:17.809 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.809 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.377 nvme0n1 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.377 17:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.377 nvme0n1 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.377 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.378 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.378 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.378 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:18.378 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.637 nvme0n1 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:18.637 17:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.637 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.896 nvme0n1 00:34:18.897 17:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.897 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.157 nvme0n1 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.157 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.158 nvme0n1 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.158 17:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.418 nvme0n1 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.418 
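The records above complete one full verification cycle for key 0 under sha384/ffdhe3072, and the same cycle repeats for each remaining key below: pin the host to a single digest and DH group, attach over TCP with the key under test, confirm a controller named nvme0 exists, and detach. A minimal bash sketch of that cycle, condensed from the connect_authenticate trace (the helper name connect_cycle is made up here, and the unconditional --dhchap-ctrlr-key is a simplification; the script's ${ckeys[keyid]:+...} expansion omits it when no controller key is defined, as with key 4):

```bash
# Sketch of one authenticated connect/verify/detach cycle, assuming rpc_cmd
# wraps SPDK's rpc.py as in the surrounding trace; connect_cycle is illustrative.
connect_cycle() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host to exactly one digest/DH-group combination.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP; the RPC fails if the DH-HMAC-CHAP handshake does not complete.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # A surviving controller named nvme0 is the pass signal; clean up for the next key.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
```

Because the attach RPC errors out on a failed handshake, the repeated `[[ nvme0 == \n\v\m\e\0 ]]` comparisons in the trace are the effective pass/fail assertions for each digest/group/key combination.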
17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.418 17:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.418 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.677 nvme0n1 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.677 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.935 nvme0n1 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.935 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.936 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.195 nvme0n1 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:20.195 
17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.195 nvme0n1 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.195 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.195 
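Just below, the outer loop at host/auth.sh@101 advances the sweep from ffdhe3072 to ffdhe4096. Lines @101 through @104 of the trace show the structure driving this whole section: a nested iteration over DH groups and key IDs, re-keying the kernel nvmet target and re-running the authenticated connect for every combination. Reconstructed as a sketch (the dhgroups contents are inferred from the groups exercised in this trace and may not be the script's full set; keys and ckeys hold the DHHC-1 secrets set up earlier in the test):

```bash
# Sketch of the sweep visible at host/auth.sh@101-@104. The digest is fixed
# at sha384 for this pass; nvmet_auth_set_key and connect_authenticate are
# the script's own helpers, shown here only by call shape.
digest=sha384
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Install the key on the target side, then prove the host can
        # authenticate with the same digest/group/key triple.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
done
```

The `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion at @58 handles the asymmetric case: ckeys[4] is empty, so for key 4 the array expands to nothing and the attach runs without bidirectional authentication, which is why the trace shows a bare `ckey=` and `[[ -z '' ]]` for that key while every other attach passes a ckeyN.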
17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.454 17:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.454 nvme0n1 00:34:20.454 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.455 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.455 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.455 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.455 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:20.714 17:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.714 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.974 nvme0n1 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.974 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.233 nvme0n1 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.233 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.234 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.494 nvme0n1 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.494 17:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.494 17:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.494 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.495 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.495 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:21.495 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.495 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.755 nvme0n1 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.755 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.015 nvme0n1 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.015 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.016 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.277 17:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.610 nvme0n1 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.610 17:01:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.610 17:01:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.610 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.886 nvme0n1 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:22.886 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:22.887 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.887 
17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.456 nvme0n1 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.456 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.457 17:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.717 nvme0n1 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:23.717 17:01:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.717 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.286 nvme0n1 00:34:24.286 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.286 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:24.286 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:24.286 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.286 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.286 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.545 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:24.545 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:24.545 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.545 17:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.545 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.546 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.115 nvme0n1 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.115 
17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.115 17:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.685 nvme0n1 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.685 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.254 nvme0n1 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.254 17:01:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.254 17:01:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.254 17:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.191 nvme0n1 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:27.191 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:27.192 nvme0n1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.192 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.453 nvme0n1 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:27.453 
17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.453 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.454 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.454 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.454 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.454 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.454 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:27.454 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.454 17:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.454 nvme0n1 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.454 
17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.454 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.714 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.771 nvme0n1 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:27.771 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.772 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.032 nvme0n1 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:28.032 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.033 nvme0n1 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.033 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.293 
17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.293 17:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.293 nvme0n1 00:34:28.293 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:28.294 17:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.294 17:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.553 nvme0n1 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.553 17:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.553 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.812 nvme0n1 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:28.812 
17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.812 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
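Each pass above repeats one pattern: nvmet_auth_set_key provisions the target side for the current digest/dhgroup/keyid combination, and connect_authenticate then drives the SPDK host through a full authenticated connect. The DHHC-1:NN:...: strings are NVMe DH-HMAC-CHAP secret representations, where the NN field records how the secret was derived (00 = used as-is, 01/02/03 = via SHA-256/384/512); keyid 4 carries no controller secret, so that iteration authenticates unidirectionally. The echoes at host/auth.sh@48-51 are consistent with writes into the kernel nvmet configfs; a minimal sketch of that target-side step, assuming the stock /sys/kernel/config/nvmet layout and the host NQN used throughout this log (keys truncated as shown above):

# Provision DH-CHAP material for one (digest, dhgroup, keyid) combination.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host/dhchap_hash"     # digest under test (auth.sh@48)
echo ffdhe3072      > "$host/dhchap_dhgroup"  # DH group under test (auth.sh@49)
echo 'DHHC-1:00:MWZlYTk2...FzK9SA==:' > "$host/dhchap_key"       # host secret, keyid 1 (auth.sh@50)
echo 'DHHC-1:02:YzJlMDhi...sAhTTQ==:' > "$host/dhchap_ctrl_key"  # controller secret ckey1 (auth.sh@51)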
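On the host side, connect_authenticate mirrors that configuration through SPDK RPCs, all of which appear verbatim in the trace: bdev_nvme_set_options restricts the allowed digests and DH groups, bdev_nvme_attach_controller performs the authenticated fabric connect (10.0.0.1 being the initiator address selected by get_main_ns_ip), and bdev_nvme_get_controllers plus bdev_nvme_detach_controller verify and tear down the controller between iterations. The same sequence, sketched with the standard scripts/rpc.py client; key1/ckey1 are assumed to be the keyring names the suite registered earlier in the run:

# Allow only the digest/dhgroup pair under test (auth.sh@60)
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Authenticated connect (auth.sh@61); on the keyid=4 iteration there is no
# controller secret, so --dhchap-ctrlr-key is omitted there
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller exists, then detach before the next combination (auth.sh@64-65)
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0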
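The get_main_ns_ip helper whose expansion recurs throughout (nvmf/common.sh@769-783) simply maps the transport to the right address variable and dereferences it. A reconstruction from the visible xtrace only; the transport variable's real name is hidden by expansion (the trace shows the already-expanded "[[ -z tcp ]]"), so TEST_TRANSPORT here is a guess:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA uses the first target IP (@772)
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP uses the initiator IP (@773)
    # Both emptiness checks appear, already expanded, at @775
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}         # e.g. NVMF_INITIATOR_IP (@776)
    [[ -z ${!ip} ]] && return 1                  # indirect expansion: 10.0.0.1 set? (@778)
    echo "${!ip}"                                # -> 10.0.0.1 (@783)
}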
00:34:29.071 nvme0n1 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:29.071 17:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.071 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.330 nvme0n1 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.330 17:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.330 17:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.330 17:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.589 nvme0n1 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
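On the initiator side, connect_authenticate (host/auth.sh@55-@61) boils down to the two rpc_cmd calls visible in the trace: pin the allowed digest and DH group, then attach with the key pair for this iteration. A condensed sketch, assuming SPDK's standard rpc.py client and that the key objects key1/ckey1 were registered earlier in the test (that setup is outside this excerpt):

  # Host-side half of connect_authenticate, mirroring the rpc_cmd lines above.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

If the handshake fails, the attach RPC errors out and the subsequent controller-name check never sees nvme0.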
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.589 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 nvme0n1 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.848 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.107 nvme0n1 00:34:30.107 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.107 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.108 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.367 nvme0n1 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
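get_main_ns_ip (nvmf/common.sh@769-@783) is the small helper traced before every attach: an associative array maps each transport to the name of the environment variable holding the address to dial, and indirect expansion turns that name into its value (10.0.0.1 here, since the transport is tcp). A reconstruction from the traced lines; the early-return handling is inferred from the [[ -z ... ]] guards rather than shown verbatim:

  # Reconstruction of get_main_ns_ip as traced above; guard details are inferred.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}            # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
      [[ -z $ip ]] && return 1
      echo "$ip"
  }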
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:30.367 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.368 17:01:18 
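Every secret in this log uses the DH-HMAC-CHAP secret representation from the NVMe-oF spec, DHHC-1:<t>:<base64>:, where <t> is 00 for an untransformed secret and 01/02/03 for a secret pre-transformed with SHA-256/384/512, and the base64 payload carries the secret followed by a CRC-32 check value. A quick, informal way to peel one apart (an illustration, not a validator):

  # Split a DHHC-1 secret representation into its fields (key 0 from this log).
  secret='DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq:'
  IFS=: read -r tag transform payload _ <<< "$secret"
  echo "tag=$tag transform=$transform"            # DHHC-1, 00 = untransformed
  printf '%s' "$payload" | base64 -d | wc -c      # secret bytes plus 4-byte CRC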
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.368 17:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.936 nvme0n1 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:30.937 17:01:19 
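After each attach, the check-and-teardown pattern that opens every nvme0n1 block (host/auth.sh@64-@65) confirms that exactly one controller came up under the expected name, then detaches it so the next digest/dhgroup/keyid combination starts clean:

  # Post-connect verification and cleanup, per host/auth.sh@64-@65.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                          # authentication and connect succeeded
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0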
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.937 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.198 nvme0n1 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.198 17:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.769 nvme0n1 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:31.769 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:31.770 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:31.770 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:31.770 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.770 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.029 nvme0n1 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.029 17:01:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.029 17:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.599 nvme0n1 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
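keyid 4 is the one index with no controller key (ckey=''), and the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what silently switches that case to unidirectional authentication: when ckeys[keyid] is empty the whole option pair expands to nothing, which is why the attach for key4 above carries --dhchap-key key4 but no --dhchap-ctrlr-key. The same idiom in isolation:

  # ${var:+word} expands to nothing when var is empty or unset.
  ckeys[4]=''
  args=(${ckeys[4]:+--dhchap-ctrlr-key "ckey4"})
  echo "${#args[@]}"    # 0 -> no controller key passed; target does not authenticate back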
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWI0ZTk2MWI1Mjg2ZjgwOWE5MGU4NzVlOTcxY2VkZDKPbeLq: 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTU5ODA3NzY3ZmViZjU0MjZiN2UwMDVmMmU1ZGRhYjQ5Yzc1Zjg2MDhkYmEwMjBlZDUxM2M5ZjI1ZmFhMzFiN/oNR/w=: 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.599 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.168 nvme0n1 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.168 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.169 17:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.738 nvme0n1 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:33.738 17:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.738 17:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.738 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.309 nvme0n1 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGVlMGQxNmI0N2VmM2ExMWI0ZmYxMGJjZjg1NmE2Mjk1ZmY1NWEyNTJkNGZlOWE0uQnO8Q==: 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: ]] 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJlMDYyMTgwNTcyOGRlZTM0MDljY2QyMDdmNGRjNzLvHP9d: 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.309 17:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.309 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:34.570 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.570 
17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.141 nvme0n1 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGQxYjcwMDc3Mjg3YWQ1YzQ2N2U0ODczOWJiYzU2OWMxNmI3ZmRjYWU0YWE0YzM5YTYwNzc1ZTQzMWE0YmYwOFWY340=: 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.141 17:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.710 nvme0n1 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.710 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.711 request: 00:34:35.711 { 00:34:35.711 "name": "nvme0", 00:34:35.711 "trtype": "tcp", 00:34:35.711 "traddr": "10.0.0.1", 00:34:35.711 "adrfam": "ipv4", 00:34:35.711 "trsvcid": "4420", 00:34:35.711 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.711 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.711 "prchk_reftag": false, 00:34:35.711 "prchk_guard": false, 00:34:35.711 "hdgst": false, 00:34:35.711 "ddgst": false, 00:34:35.711 "allow_unrecognized_csi": false, 00:34:35.711 "method": "bdev_nvme_attach_controller", 00:34:35.711 "req_id": 1 00:34:35.711 } 00:34:35.711 Got JSON-RPC error response 00:34:35.711 response: 00:34:35.711 { 00:34:35.711 "code": -5, 00:34:35.711 "message": "Input/output error" 00:34:35.711 } 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
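The failed attach above is the expected outcome: no DH-HMAC-CHAP key is supplied, so the authenticating target rejects the connection with JSON-RPC error -5. For reference, the same negative check can be driven by hand with SPDK's rpc.py; a minimal sketch, reusing the listener address, service ID, and NQNs exactly as they appear in the trace (SPDK_DIR is an assumed checkout path):

    #!/usr/bin/env bash
    # Minimal sketch: attach to an authenticating target without a
    # DH-HMAC-CHAP key and treat the expected failure as success. The
    # transport arguments are taken verbatim from the trace above;
    # SPDK_DIR is an assumption.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    if "$SPDK_DIR/scripts/rpc.py" bdev_nvme_attach_controller \
            -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected success: target did not enforce authentication" >&2
        exit 1
    fi
    echo "attach failed as expected (Input/output error, code -5)"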
00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.711 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.972 request: 00:34:35.972 { 00:34:35.972 "name": "nvme0", 00:34:35.972 "trtype": "tcp", 00:34:35.972 "traddr": "10.0.0.1", 00:34:35.972 "adrfam": "ipv4", 00:34:35.972 "trsvcid": "4420", 00:34:35.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.972 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.972 "prchk_reftag": false, 00:34:35.972 "prchk_guard": false, 00:34:35.972 "hdgst": false, 00:34:35.972 "ddgst": false, 00:34:35.972 "dhchap_key": "key2", 00:34:35.972 "allow_unrecognized_csi": false, 00:34:35.972 "method": "bdev_nvme_attach_controller", 00:34:35.972 "req_id": 1 00:34:35.972 } 00:34:35.972 Got JSON-RPC error response 00:34:35.972 response: 00:34:35.972 { 00:34:35.972 "code": -5, 00:34:35.972 "message": "Input/output error" 00:34:35.972 } 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
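Both rejected attaches are driven through the harness's NOT wrapper, whose exit-status bookkeeping is visible in the trace (local es=0, es=1, and the (( es > 128 )) signal check). A simplified sketch of that expected-failure pattern, not the exact autotest_common.sh implementation (which also validates the wrapped command with `type -t`):

    # Simplified expected-failure wrapper, modeled on the es accounting
    # in the trace above.
    NOT() {
        local es=0
        "$@" || es=$?
        if ((es > 128)); then
            return "$es"    # died from a signal: propagate, never invert
        fi
        ((es != 0))         # succeed only when the wrapped command failed
    }

    # Usage, mirroring the log: this attach must fail against the subsystem.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2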
00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.972 request: 00:34:35.972 { 00:34:35.972 "name": "nvme0", 00:34:35.972 "trtype": "tcp", 00:34:35.972 "traddr": "10.0.0.1", 00:34:35.972 "adrfam": "ipv4", 00:34:35.972 "trsvcid": "4420", 00:34:35.972 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:34:35.972 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:34:35.972 "prchk_reftag": false, 00:34:35.972 "prchk_guard": false, 00:34:35.972 "hdgst": false, 00:34:35.972 "ddgst": false, 00:34:35.972 "dhchap_key": "key1", 00:34:35.972 "dhchap_ctrlr_key": "ckey2", 00:34:35.972 "allow_unrecognized_csi": false, 00:34:35.972 "method": "bdev_nvme_attach_controller", 00:34:35.972 "req_id": 1 00:34:35.972 } 00:34:35.972 Got JSON-RPC error response 00:34:35.972 response: 00:34:35.972 { 00:34:35.972 "code": -5, 00:34:35.972 "message": "Input/output 
error" 00:34:35.972 } 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:35.972 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.973 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:35.973 nvme0n1 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.233 request: 00:34:36.233 { 00:34:36.233 "name": "nvme0", 00:34:36.233 "dhchap_key": "key1", 00:34:36.233 "dhchap_ctrlr_key": "ckey2", 00:34:36.233 "method": "bdev_nvme_set_keys", 00:34:36.233 "req_id": 1 00:34:36.233 } 00:34:36.233 Got JSON-RPC error response 00:34:36.233 response: 00:34:36.233 { 00:34:36.233 "code": -13, 00:34:36.233 "message": "Permission denied" 00:34:36.233 } 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:36.233 17:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:37.172 17:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.172 17:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.172 17:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:37.172 17:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.172 17:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.431 17:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:34:37.431 17:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWZlYTk2OTUxM2Y0YjJjMDNmNTYzZGJkZjI3MGI3YTFjMzZjZDY2MjJlNDUxZWI5FzK9SA==: 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: ]] 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzJlMDhiZTA0NTM5MjgzZWQwNjFmMDg1OWMyYWEwNmJjNzhiYjE1Mjc3ZDcxMjg1sAhTTQ==: 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.368 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.369 17:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.369 nvme0n1 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODhlN2JlMmU0YzYxNzkyZTA2ZmFhYWUxZDM4NzA5MDJjyNMf: 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: ]] 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDVkNzEwMzAzZmZhOGVkZjU0YzJjZjBhYjlmZTY5ZTkrTa46: 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.369 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.630 request: 00:34:38.630 { 00:34:38.630 "name": "nvme0", 00:34:38.630 "dhchap_key": "key2", 00:34:38.630 "dhchap_ctrlr_key": "ckey1", 00:34:38.630 "method": "bdev_nvme_set_keys", 00:34:38.630 "req_id": 1 00:34:38.630 } 00:34:38.630 Got JSON-RPC error response 00:34:38.630 response: 00:34:38.630 { 00:34:38.630 "code": -13, 00:34:38.630 "message": "Permission denied" 00:34:38.630 } 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:34:38.630 17:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:34:39.568 17:01:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.568 rmmod nvme_tcp 00:34:39.568 rmmod nvme_fabrics 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2480683 ']' 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2480683 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2480683 ']' 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2480683 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2480683 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2480683' 00:34:39.568 killing process with pid 2480683 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2480683 00:34:39.568 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2480683 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.826 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.827 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.827 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.827 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:34:39.827 17:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:41.732 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:41.992 17:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:44.525 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:34:44.525 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:34:44.784 17:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.SdL /tmp/spdk.key-null.Sqy /tmp/spdk.key-sha256.E5J /tmp/spdk.key-sha384.Nuo /tmp/spdk.key-sha512.iYa /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:34:44.784 17:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:47.317 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:47.317 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:34:47.318 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:34:47.318 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:34:47.318 00:34:47.318 real 0m49.261s 00:34:47.318 user 0m43.246s 00:34:47.318 sys 0m11.464s 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.318 ************************************ 00:34:47.318 END TEST nvmf_auth_host 00:34:47.318 ************************************ 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.318 ************************************ 00:34:47.318 START TEST nvmf_digest 00:34:47.318 ************************************ 00:34:47.318 17:01:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:34:47.576 * Looking for test storage... 
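Before the digest test above begins, the auth test's cleanup tears down the kernel nvmet target through configfs (auth.sh@25-27 plus clean_kernel_target in nvmf/common.sh). The same steps in plain shell, a sketch using the paths from the trace; the destination of the bare `echo 0` is not visible in the log and is assumed here to be the namespace enable attribute:

    # Sketch of the kernel nvmet teardown traced above (run as root).
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1

    rm -f "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"  # drop host ACL
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/namespaces/1/enable"   # assumed target of 'echo 0'
    rm -f "$port/subsystems/nqn.2024-02.io.spdk:cnode0"      # unexport
    rmdir "$subsys/namespaces/1"
    rmdir "$port"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet              # module unload, as in the trace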
00:34:47.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:47.576 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:47.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.576 --rc genhtml_branch_coverage=1 00:34:47.576 --rc genhtml_function_coverage=1 00:34:47.576 --rc genhtml_legend=1 00:34:47.576 --rc geninfo_all_blocks=1 00:34:47.576 --rc geninfo_unexecuted_blocks=1 00:34:47.576 00:34:47.576 ' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:47.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.577 --rc genhtml_branch_coverage=1 00:34:47.577 --rc genhtml_function_coverage=1 00:34:47.577 --rc genhtml_legend=1 00:34:47.577 --rc geninfo_all_blocks=1 00:34:47.577 --rc geninfo_unexecuted_blocks=1 00:34:47.577 00:34:47.577 ' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:47.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.577 --rc genhtml_branch_coverage=1 00:34:47.577 --rc genhtml_function_coverage=1 00:34:47.577 --rc genhtml_legend=1 00:34:47.577 --rc geninfo_all_blocks=1 00:34:47.577 --rc geninfo_unexecuted_blocks=1 00:34:47.577 00:34:47.577 ' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:47.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:47.577 --rc genhtml_branch_coverage=1 00:34:47.577 --rc genhtml_function_coverage=1 00:34:47.577 --rc genhtml_legend=1 00:34:47.577 --rc geninfo_all_blocks=1 00:34:47.577 --rc geninfo_unexecuted_blocks=1 00:34:47.577 00:34:47.577 ' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:47.577 
17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:47.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:47.577 17:01:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:34:47.577 17:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:52.849 
17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:52.849 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:52.849 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:52.849 Found net devices under 0000:31:00.0: cvl_0_0 
00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:52.849 Found net devices under 0000:31:00.1: cvl_0_1 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:52.849 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:52.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:52.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:34:52.850 00:34:52.850 --- 10.0.0.2 ping statistics --- 00:34:52.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.850 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:52.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:52.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:34:52.850 00:34:52.850 --- 10.0.0.1 ping statistics --- 00:34:52.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:52.850 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:52.850 ************************************ 00:34:52.850 START TEST nvmf_digest_clean 00:34:52.850 ************************************ 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2497582 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2497582 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2497582 ']' 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:52.850 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:34:52.850 [2024-12-06 17:01:41.520578] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:34:52.850 [2024-12-06 17:01:41.520625] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:53.110 [2024-12-06 17:01:41.603855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.110 [2024-12-06 17:01:41.620745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:53.110 [2024-12-06 17:01:41.620778] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:53.110 [2024-12-06 17:01:41.620787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:53.110 [2024-12-06 17:01:41.620795] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:53.110 [2024-12-06 17:01:41.620801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
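nvmftestinit above turned the two ice ports found under 0000:31:00.0/1 into a point-to-point TCP test net: cvl_0_0 is moved into a fresh network namespace as the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens port 4420, and a ping in each direction proves the link before modprobe nvme-tcp. The nvmf_tgt launch traced just above runs inside that namespace (ip netns exec cvl_0_0_ns_spdk), which is why its 4420 listener is reachable only across the cvl pair. A rough reconstruction of nvmf_tcp_init from the traced commands, with interface and address names taken verbatim from the log:

    # Rough sketch only; the real logic lives in nvmf/common.sh:nvmf_tcp_init.
    ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1     # target -> initiator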
00:34:53.110 [2024-12-06 17:01:41.621342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.110 null0 00:34:53.110 [2024-12-06 17:01:41.744144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:53.110 [2024-12-06 17:01:41.768394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2497628 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2497628 /var/tmp/bperf.sock 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2497628 ']' 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:34:53.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:53.110 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:53.370 [2024-12-06 17:01:41.808370] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:34:53.370 [2024-12-06 17:01:41.808425] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2497628 ] 00:34:53.370 [2024-12-06 17:01:41.877284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:53.370 [2024-12-06 17:01:41.897683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:53.370 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:53.370 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:53.370 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:53.370 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:53.370 17:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:53.630 17:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.630 17:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:53.888 nvme0n1 00:34:53.889 17:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:53.889 17:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:54.148 Running I/O for 2 seconds... 
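Each run_bperf pass repeats the same remote-control pattern: bdevperf starts with --wait-for-rpc so nothing initializes until the driver script has configured it, then three RPC calls over /var/tmp/bperf.sock start the framework, attach an NVMe-oF controller with data digest enabled (--ddgst), and kick off the timed run whose results follow below. Condensed from the trace, with $SPDK standing in for the long /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk prefix:

    # Condensed run_bperf flow (host/digest.sh); the harness backgrounds
    # bdevperf and waits for the RPC socket before issuing the calls.
    sock=/var/tmp/bperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -r "$sock" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    "$SPDK/scripts/rpc.py" -s "$sock" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

Deferring framework_start_init is what lets variants of this test adjust the accel configuration before any crc32c work is dispatched; these scan_dsa=false runs keep the software default.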
00:34:56.019 20001.00 IOPS, 78.13 MiB/s [2024-12-06T16:01:44.712Z] 20170.50 IOPS, 78.79 MiB/s
00:34:56.019 Latency(us)
00:34:56.019 [2024-12-06T16:01:44.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:56.019 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:56.019 nvme0n1 : 2.01 20183.86 78.84 0.00 0.00 6333.31 1993.39 22391.47
00:34:56.019 [2024-12-06T16:01:44.712Z] ===================================================================================================================
00:34:56.019 [2024-12-06T16:01:44.712Z] Total : 20183.86 78.84 0.00 0.00 6333.31 1993.39 22391.47
00:34:56.019 {
00:34:56.019 "results": [
00:34:56.019 {
00:34:56.019 "job": "nvme0n1",
00:34:56.019 "core_mask": "0x2",
00:34:56.019 "workload": "randread",
00:34:56.019 "status": "finished",
00:34:56.019 "queue_depth": 128,
00:34:56.019 "io_size": 4096,
00:34:56.019 "runtime": 2.005018,
00:34:56.019 "iops": 20183.8586985254,
00:34:56.019 "mibps": 78.84319804111485,
00:34:56.019 "io_failed": 0,
00:34:56.019 "io_timeout": 0,
00:34:56.019 "avg_latency_us": 6333.309139670695,
00:34:56.019 "min_latency_us": 1993.3866666666668,
00:34:56.019 "max_latency_us": 22391.466666666667
00:34:56.019 }
00:34:56.019 ],
00:34:56.019 "core_count": 1
00:34:56.019 }
00:34:56.019 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:56.019 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:56.019 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:56.019 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:56.019 | select(.opcode=="crc32c")
00:34:56.019 | "\(.module_name) \(.executed)"'
00:34:56.019 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2497628
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2497628 ']'
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2497628
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497628
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497628' 00:34:56.279 killing process with pid 2497628 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2497628 00:34:56.279 Received shutdown signal, test time was about 2.000000 seconds 00:34:56.279 00:34:56.279 Latency(us) 00:34:56.279 [2024-12-06T16:01:44.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:56.279 [2024-12-06T16:01:44.972Z] =================================================================================================================== 00:34:56.279 [2024-12-06T16:01:44.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2497628 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2498302 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2498302 /var/tmp/bperf.sock 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2498302 ']' 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:56.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:56.279 17:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:34:56.279 [2024-12-06 17:01:44.969790] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:34:56.279 [2024-12-06 17:01:44.969834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498302 ] 00:34:56.279 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:56.279 Zero copy mechanism will not be used. 00:34:56.538 [2024-12-06 17:01:45.025863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.538 [2024-12-06 17:01:45.040360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:56.538 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.538 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:56.538 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:56.538 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:56.538 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:56.797 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:56.797 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:57.055 nvme0n1 00:34:57.055 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:34:57.055 17:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:57.055 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:57.055 Zero copy mechanism will not be used. 00:34:57.055 Running I/O for 2 seconds... 
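After each timed run the test proves the digests were actually computed where expected: it fetches accel_get_stats over the bperf socket, filters the crc32c opcode with jq, and requires both a non-zero execution count and the expected module name, 'software' in these scan_dsa=false passes. Roughly as follows, with the exact plumbing into read being an assumption (the jq program is verbatim from the trace):

    # Approximate reconstruction of get_accel_stats plus the @93-@96 checks.
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))            # some crc32c work must have run
    [[ $acc_module == software ]]     # and in the expected accel module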
00:34:59.370 4104.00 IOPS, 513.00 MiB/s [2024-12-06T16:01:48.063Z] 3845.50 IOPS, 480.69 MiB/s
00:34:59.370 Latency(us)
00:34:59.370 [2024-12-06T16:01:48.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:59.370 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:59.370 nvme0n1 : 2.00 3850.56 481.32 0.00 0.00 4152.31 518.83 7427.41
00:34:59.370 [2024-12-06T16:01:48.063Z] ===================================================================================================================
00:34:59.370 [2024-12-06T16:01:48.063Z] Total : 3850.56 481.32 0.00 0.00 4152.31 518.83 7427.41
00:34:59.370 {
00:34:59.370 "results": [
00:34:59.370 {
00:34:59.370 "job": "nvme0n1",
00:34:59.370 "core_mask": "0x2",
00:34:59.370 "workload": "randread",
00:34:59.370 "status": "finished",
00:34:59.370 "queue_depth": 16,
00:34:59.370 "io_size": 131072,
00:34:59.370 "runtime": 2.001527,
00:34:59.370 "iops": 3850.5600973656615,
00:34:59.370 "mibps": 481.3200121707077,
00:34:59.370 "io_failed": 0,
00:34:59.370 "io_timeout": 0,
00:34:59.370 "avg_latency_us": 4152.310699364215,
00:34:59.370 "min_latency_us": 518.8266666666667,
00:34:59.370 "max_latency_us": 7427.413333333333
00:34:59.370 }
00:34:59.370 ],
00:34:59.370 "core_count": 1
00:34:59.370 }
00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:34:59.371 | select(.opcode=="crc32c")
00:34:59.371 | "\(.module_name) \(.executed)"'
00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:59.371
reactor_1 = sudo ']' 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498302' 00:34:59.371 killing process with pid 2498302 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2498302 00:34:59.371 Received shutdown signal, test time was about 2.000000 seconds 00:34:59.371 00:34:59.371 Latency(us) 00:34:59.371 [2024-12-06T16:01:48.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:59.371 [2024-12-06T16:01:48.064Z] =================================================================================================================== 00:34:59.371 [2024-12-06T16:01:48.064Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2498302 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2498980 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2498980 /var/tmp/bperf.sock 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2498980 ']' 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:59.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:34:59.371 17:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:34:59.371 [2024-12-06 17:01:48.009933] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:34:59.371 [2024-12-06 17:01:48.009989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498980 ] 00:34:59.631 [2024-12-06 17:01:48.073345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:59.631 [2024-12-06 17:01:48.089482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.631 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:59.631 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:34:59.631 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:34:59.631 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:34:59.631 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:34:59.890 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:34:59.890 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:00.150 nvme0n1 00:35:00.150 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:00.150 17:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:00.150 Running I/O for 2 seconds... 
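The kill sequence that closes each pass (visible again after the results below) is the harness's killprocess helper: it checks that the pid is still alive, looks up the process's command name (reactor_1 here) so it never signals a sudo wrapper by mistake, then kills the bdevperf instance and waits for it. The all-zero latency table printed under 'Received shutdown signal' is bdevperf's empty final drain report, not a failed measurement. An approximate shape, reconstructed from the xtrace lines rather than the source (the real helper in autotest_common.sh has more branches):

    # Approximation only, inferred from the traced commands.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0                    # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name != sudo ]] && echo "killing process with pid $pid"
        fi
        kill "$pid"
        wait "$pid"
    }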
00:35:02.025 30509.00 IOPS, 119.18 MiB/s [2024-12-06T16:01:50.718Z] 30612.50 IOPS, 119.58 MiB/s
00:35:02.025 Latency(us)
00:35:02.025 [2024-12-06T16:01:50.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:02.025 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:35:02.025 nvme0n1 : 2.00 30613.15 119.58 0.00 0.00 4175.99 2034.35 12178.77
00:35:02.025 [2024-12-06T16:01:50.718Z] ===================================================================================================================
00:35:02.025 [2024-12-06T16:01:50.718Z] Total : 30613.15 119.58 0.00 0.00 4175.99 2034.35 12178.77
00:35:02.025 {
00:35:02.025 "results": [
00:35:02.025 {
00:35:02.025 "job": "nvme0n1",
00:35:02.025 "core_mask": "0x2",
00:35:02.025 "workload": "randwrite",
00:35:02.025 "status": "finished",
00:35:02.025 "queue_depth": 128,
00:35:02.025 "io_size": 4096,
00:35:02.025 "runtime": 2.004139,
00:35:02.025 "iops": 30613.14609415814,
00:35:02.025 "mibps": 119.58260193030523,
00:35:02.025 "io_failed": 0,
00:35:02.025 "io_timeout": 0,
00:35:02.025 "avg_latency_us": 4175.989154781892,
00:35:02.025 "min_latency_us": 2034.3466666666666,
00:35:02.025 "max_latency_us": 12178.773333333333
00:35:02.025 }
00:35:02.025 ],
00:35:02.025 "core_count": 1
00:35:02.025 }
00:35:02.025 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:35:02.025 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:35:02.025 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:35:02.026 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:35:02.026 | select(.opcode=="crc32c")
00:35:02.026 | "\(.module_name) \(.executed)"'
00:35:02.026 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2498980
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2498980 ']'
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2498980
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498980
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 --
# '[' reactor_1 = sudo ']' 00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498980' 00:35:02.285 killing process with pid 2498980 00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2498980 00:35:02.285 Received shutdown signal, test time was about 2.000000 seconds 00:35:02.285 00:35:02.285 Latency(us) 00:35:02.285 [2024-12-06T16:01:50.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:02.285 [2024-12-06T16:01:50.978Z] =================================================================================================================== 00:35:02.285 [2024-12-06T16:01:50.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:02.285 17:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2498980 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2499654 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2499654 /var/tmp/bperf.sock 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2499654 ']' 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:02.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:02.545 [2024-12-06 17:01:51.036033] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
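[Editor's note: the bdevperf instance launched above is started with both -z and --wait-for-rpc, so it does nothing until the harness drives it over /var/tmp/bperf.sock. A minimal Python sketch of that control sequence, using only the rpc.py and bdevperf.py invocations visible in this log (the SPDK path and all arguments are copied from the log lines; the helper name "rpc" is mine):

#!/usr/bin/env python3
# Sketch only: replays the bperf RPC sequence shown in this log.
import subprocess

SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
SOCK = "/var/tmp/bperf.sock"

def rpc(*args: str) -> str:
    # Every bperf_rpc call in the log is: rpc.py -s /var/tmp/bperf.sock <method> ...
    out = subprocess.run([f"{SPDK}/scripts/rpc.py", "-s", SOCK, *args],
                         check=True, capture_output=True, text=True)
    return out.stdout

# bdevperf was started with --wait-for-rpc, so the SPDK framework must be
# initialized explicitly before any bdev RPCs will succeed.
rpc("framework_start_init")

# --ddgst attaches the controller with the NVMe/TCP data digest enabled.
rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
    "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")

# -z told bdevperf not to start I/O on its own; perform_tests kicks it off.
subprocess.run([f"{SPDK}/examples/bdev/bdevperf/bdevperf.py",
                "-s", SOCK, "perform_tests"], check=True)

After perform_tests completes, the harness reads accel_get_stats through the same socket and filters for the crc32c opcode with the jq expression shown earlier in the log, checking that the executed count is nonzero and that the module matches the expected one (software here, since scan_dsa=false).]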
00:35:02.545 [2024-12-06 17:01:51.036088] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2499654 ] 00:35:02.545 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:02.545 Zero copy mechanism will not be used. 00:35:02.545 [2024-12-06 17:01:51.099487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:02.545 [2024-12-06 17:01:51.115603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:02.545 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:02.804 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:02.804 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:03.064 nvme0n1 00:35:03.064 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:03.064 17:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:03.064 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:03.064 Zero copy mechanism will not be used. 00:35:03.064 Running I/O for 2 seconds... 
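[Editor's note: the controller above was attached with --ddgst, so every data PDU on this connection carries a data digest (DDGST), which NVMe/TCP defines as CRC-32C over the PDU payload; the "data digest error" storm later in this log is the host recomputing exactly this value and mismatching. A small pure-Python reference of the checksum, for illustration only; SPDK actually computes it through its accel framework:

# Sketch only: bit-at-a-time CRC-32C (Castagnoli), the checksum behind
# the NVMe/TCP DDGST that --ddgst enabled on the controller above.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the reflected CRC-32C polynomial.
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

# Standard check value for CRC-32C.
assert crc32c(b"123456789") == 0xE3069283

A single flipped bit anywhere in the payload changes this value, which is why the corrupt-injection test further down reliably trips the data digest error path on the receiving side.]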
00:35:05.384 3893.00 IOPS, 486.62 MiB/s [2024-12-06T16:01:54.077Z] 3928.00 IOPS, 491.00 MiB/s 00:35:05.384 Latency(us) 00:35:05.384 [2024-12-06T16:01:54.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.384 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:05.384 nvme0n1 : 2.00 3927.14 490.89 0.00 0.00 4067.76 1174.19 6990.51 00:35:05.384 [2024-12-06T16:01:54.077Z] =================================================================================================================== 00:35:05.384 [2024-12-06T16:01:54.077Z] Total : 3927.14 490.89 0.00 0.00 4067.76 1174.19 6990.51 00:35:05.384 { 00:35:05.384 "results": [ 00:35:05.384 { 00:35:05.384 "job": "nvme0n1", 00:35:05.384 "core_mask": "0x2", 00:35:05.384 "workload": "randwrite", 00:35:05.384 "status": "finished", 00:35:05.384 "queue_depth": 16, 00:35:05.384 "io_size": 131072, 00:35:05.384 "runtime": 2.004514, 00:35:05.384 "iops": 3927.1364530255214, 00:35:05.384 "mibps": 490.8920566281902, 00:35:05.384 "io_failed": 0, 00:35:05.384 "io_timeout": 0, 00:35:05.384 "avg_latency_us": 4067.759349593496, 00:35:05.384 "min_latency_us": 1174.1866666666667, 00:35:05.384 "max_latency_us": 6990.506666666667 00:35:05.384 } 00:35:05.384 ], 00:35:05.384 "core_count": 1 00:35:05.384 } 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:05.384 | select(.opcode=="crc32c") 00:35:05.384 | "\(.module_name) \(.executed)"' 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2499654 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2499654 ']' 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2499654 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2499654 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2499654' 00:35:05.384 killing process with pid 2499654 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2499654 00:35:05.384 Received shutdown signal, test time was about 2.000000 seconds 00:35:05.384 00:35:05.384 Latency(us) 00:35:05.384 [2024-12-06T16:01:54.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:05.384 [2024-12-06T16:01:54.077Z] =================================================================================================================== 00:35:05.384 [2024-12-06T16:01:54.077Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2499654 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2497582 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2497582 ']' 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2497582 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:05.384 17:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497582 00:35:05.384 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:05.384 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:05.384 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497582' 00:35:05.384 killing process with pid 2497582 00:35:05.384 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2497582 00:35:05.384 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2497582 00:35:05.646 00:35:05.646 real 0m12.658s 00:35:05.646 user 0m25.134s 00:35:05.646 sys 0m2.854s 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:05.646 ************************************ 00:35:05.646 END TEST nvmf_digest_clean 00:35:05.646 ************************************ 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:05.646 ************************************ 00:35:05.646 START TEST nvmf_digest_error 00:35:05.646 ************************************ 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2500355 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2500355 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2500355 ']' 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.646 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:05.646 [2024-12-06 17:01:54.226897] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:35:05.646 [2024-12-06 17:01:54.226946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.646 [2024-12-06 17:01:54.297482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.646 [2024-12-06 17:01:54.311231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.646 [2024-12-06 17:01:54.311259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.646 [2024-12-06 17:01:54.311265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.646 [2024-12-06 17:01:54.311270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.646 [2024-12-06 17:01:54.311275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
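[Editor's note: the nvmf_digest_error test starting here holds the target at --wait-for-rpc and reroutes the crc32c opcode onto SPDK's error-injecting accel module before configuration continues; further down, injection is switched from disable to corrupt for 256 operations. A sketch of that RPC sequence, with arguments copied from the rpc_cmd lines in this log; the use of the target's default /var/tmp/spdk.sock socket is an assumption of this sketch:

# Sketch only: the accel error-injection setup used by nvmf_digest_error.
import subprocess

SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"

def rpc_cmd(*args: str) -> None:
    # rpc.py with no -s argument talks to the default /var/tmp/spdk.sock.
    subprocess.run([f"{SPDK}/scripts/rpc.py", *args], check=True)

# While the target is still parked at --wait-for-rpc, route the crc32c
# opcode through the "error" accel module instead of the software one.
rpc_cmd("accel_assign_opc", "-o", "crc32c", "-m", "error")

# Start with injection disabled so the bperf controller can attach cleanly...
rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "disable")

# ...then corrupt the next 256 crc32c results once I/O is about to start.
rpc_cmd("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")

The corrupted digests are detected on the other end of the TCP connection: the host reports "data digest error" on receive and completes the affected READs with COMMAND TRANSIENT TRANSPORT ERROR (00/22), exactly as the flood of NOTICE lines below shows.]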
00:35:05.646 [2024-12-06 17:01:54.311749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 [2024-12-06 17:01:54.380134] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.906 null0 00:35:05.906 [2024-12-06 17:01:54.449740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.906 [2024-12-06 17:01:54.473941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2500376 00:35:05.906 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2500376 /var/tmp/bperf.sock 00:35:05.907 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2500376 ']' 00:35:05.907 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.907 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.907 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:05.907 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.907 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:05.907 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:05.907 [2024-12-06 17:01:54.512156] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:35:05.907 [2024-12-06 17:01:54.512204] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2500376 ] 00:35:05.907 [2024-12-06 17:01:54.575918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.907 [2024-12-06 17:01:54.592274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.167 17:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:06.427 nvme0n1 00:35:06.687 17:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:06.687 17:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.687 17:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # 
set +x 00:35:06.687 17:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.687 17:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:06.687 17:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:06.687 Running I/O for 2 seconds... 00:35:06.687 [2024-12-06 17:01:55.225960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.225992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.226001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.236973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.236993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.237000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.249026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.249045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.249053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.260125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.260143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.260150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.268367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.268385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.268392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.277766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.277784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.277791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.289967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.289985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.289992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.302161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.302179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.302185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.313283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.313301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.313312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.321774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.321792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.321798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.330381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.330398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.330405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.339964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.339981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.339988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.348609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.348627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.348633] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.687 [2024-12-06 17:01:55.356959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.687 [2024-12-06 17:01:55.356976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.687 [2024-12-06 17:01:55.356982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.688 [2024-12-06 17:01:55.366475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.688 [2024-12-06 17:01:55.366492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.688 [2024-12-06 17:01:55.366498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.688 [2024-12-06 17:01:55.375134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.688 [2024-12-06 17:01:55.375151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.688 [2024-12-06 17:01:55.375158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.383308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.383326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.383335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.393272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.393292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.393299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.402149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.402166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.402173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.409949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.409965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.409972] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.419370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.419387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.419393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.428581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.428598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.428604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.437023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.437041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.437047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.446618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.446636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.446642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.455258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.455275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.455282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.463754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.463772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.463782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.472732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.472750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23466 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:06.947 [2024-12-06 17:01:55.472756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.947 [2024-12-06 17:01:55.481143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.947 [2024-12-06 17:01:55.481160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.481166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.490119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.490136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.490142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.499150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.499167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.499173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.508434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.508451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.508458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.516997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.517013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.517020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.525293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.525310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.525316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.534156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.534173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.534180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.543784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.543805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.543811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.552465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.552482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.552488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.561066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.561082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.561089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.570842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.570858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.570864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.578672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.578689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.578695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.588132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.588149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.588155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.597041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.597057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.597064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.605892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.605909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.605915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.617527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.617544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.617551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:06.948 [2024-12-06 17:01:55.629035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:06.948 [2024-12-06 17:01:55.629052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:06.948 [2024-12-06 17:01:55.629058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.222 [2024-12-06 17:01:55.639412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.222 [2024-12-06 17:01:55.639429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.222 [2024-12-06 17:01:55.639435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.222 [2024-12-06 17:01:55.647693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.222 [2024-12-06 17:01:55.647711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.647717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.656456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.656473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.656479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.667009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 
00:35:07.223 [2024-12-06 17:01:55.667026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.667032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.675956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.675972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.675979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.685609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.685626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.685632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.696089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.696109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.696116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.705090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.705111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.705121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.714164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.714181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.714187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.723230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.723248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.723254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.731427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.731444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.731450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.741171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.741188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.741194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.749795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.749812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.749818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.761026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.761043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.761049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.768976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.768992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.768998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.780038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.780055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.780061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.791885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:07.223 [2024-12-06 17:01:55.791906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:07.223 [2024-12-06 17:01:55.791912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:07.223 [2024-12-06 17:01:55.802285] 
[... several dozen more identical triplets elided (17:01:55.802 through 17:01:56.207): each READ fails the data digest check on tqpair=(0x104ffb0) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), only the cid and lba varying ...]
00:35:07.848 27106.00 IOPS, 105.88 MiB/s [2024-12-06T16:01:56.541Z]
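
A quick consistency check on this throughput sample: 27106.00 IOPS × 4096 B per I/O ≈ 111.0 MB/s, which is exactly 105.88 MiB/s, so the two figures in the report agree and correspond to an average I/O size of 4 KiB.
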
[... the digest-error/read/retry triplets resume after the throughput report and continue in the same pattern (17:01:56.216 through 17:01:57.008) ...]
00:35:08.368 [2024-12-06 17:01:57.017574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0)
00:35:08.368 [2024-12-06 17:01:57.017590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:08.368 [2024-12-06 17:01:57.017597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.368 [2024-12-06 17:01:57.026121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.368 [2024-12-06 17:01:57.026138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.368 [2024-12-06 17:01:57.026145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.368 [2024-12-06 17:01:57.035130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.368 [2024-12-06 17:01:57.035147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.368 [2024-12-06 17:01:57.035153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.368 [2024-12-06 17:01:57.043436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.368 [2024-12-06 17:01:57.043454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.368 [2024-12-06 17:01:57.043460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.368 [2024-12-06 17:01:57.052240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.368 [2024-12-06 17:01:57.052257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.368 [2024-12-06 17:01:57.052263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.627 [2024-12-06 17:01:57.061473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.627 [2024-12-06 17:01:57.061491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.627 [2024-12-06 17:01:57.061497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.627 [2024-12-06 17:01:57.069808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.627 [2024-12-06 17:01:57.069825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.627 [2024-12-06 17:01:57.069832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.627 [2024-12-06 17:01:57.079095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.627 [2024-12-06 17:01:57.079116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.627 [2024-12-06 17:01:57.079122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.627 [2024-12-06 17:01:57.088160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.088177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.088184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.099332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.099350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.099356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.107386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.107403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.107410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.117086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.117107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.117113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.125832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.125848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.125855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.134609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.134626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.134632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.142678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.142695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
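Every record in this run has the same shape: the host-side TCP transport flags a data digest (CRC32C) mismatch on a received PDU, and the affected READ is completed with NVMe status (00/22), i.e. status code type 0x0 (generic) with status code 0x22, Command Transient Transport Error, which is exactly what the injected CRC corruption is meant to produce. For reading these lines, the fields SPDK prints (cid, the sct/sc pair, p, m, dnr) all live in dword 3 of the completion queue entry; a minimal decoder follows, assuming the CQE bit layout from the NVMe base specification (the function name is illustrative, not part of the harness):

# Illustrative only: decode the fields spdk_nvme_print_completion reports
# (cid, p, sct/sc, crd, m, dnr) from completion-queue-entry dword 3.
# Assumed bit layout per the NVMe base spec:
#   CID [15:0], P [16], SC [24:17], SCT [27:25], CRD [29:28], M [30], DNR [31]
def decode_cqe_dw3(dw3: int) -> dict:
    return {
        "cid": dw3 & 0xFFFF,
        "p":   (dw3 >> 16) & 0x1,
        "sc":  (dw3 >> 17) & 0xFF,  # 0x22 -> Command Transient Transport Error
        "sct": (dw3 >> 25) & 0x7,   # 0x0  -> generic command status
        "crd": (dw3 >> 28) & 0x3,
        "m":   (dw3 >> 30) & 0x1,
        "dnr": (dw3 >> 31) & 0x1,
    }

# A completion like the ones above: status (00/22), cid 23, phase 0.
example = (0x0 << 25) | (0x22 << 17) | (0 << 16) | 23
assert decode_cqe_dw3(example)["sc"] == 0x22

With sct=0x0 and sc=0x22 the pair prints as (00/22), matching every completion in this run.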
00:35:08.628 [2024-12-06 17:01:57.142701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.152153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.152170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.152180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.161329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.161346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.161353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.170332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.170348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.170355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.178023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.178040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.178046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.188573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.188590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.188597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.198323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.198340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.198346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 [2024-12-06 17:01:57.206168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.206185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 
lba:23106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.206191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 27246.00 IOPS, 106.43 MiB/s [2024-12-06T16:01:57.321Z] [2024-12-06 17:01:57.215721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ffb0) 00:35:08.628 [2024-12-06 17:01:57.215738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:08.628 [2024-12-06 17:01:57.215744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:08.628 00:35:08.628 Latency(us) 00:35:08.628 [2024-12-06T16:01:57.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.628 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:08.628 nvme0n1 : 2.00 27261.53 106.49 0.00 0.00 4689.45 2048.00 14745.60 00:35:08.628 [2024-12-06T16:01:57.321Z] =================================================================================================================== 00:35:08.628 [2024-12-06T16:01:57.321Z] Total : 27261.53 106.49 0.00 0.00 4689.45 2048.00 14745.60 00:35:08.628 { 00:35:08.628 "results": [ 00:35:08.628 { 00:35:08.628 "job": "nvme0n1", 00:35:08.628 "core_mask": "0x2", 00:35:08.628 "workload": "randread", 00:35:08.628 "status": "finished", 00:35:08.628 "queue_depth": 128, 00:35:08.628 "io_size": 4096, 00:35:08.628 "runtime": 2.003556, 00:35:08.628 "iops": 27261.529001435447, 00:35:08.628 "mibps": 106.49034766185721, 00:35:08.628 "io_failed": 0, 00:35:08.628 "io_timeout": 0, 00:35:08.628 "avg_latency_us": 4689.453056999878, 00:35:08.628 "min_latency_us": 2048.0, 00:35:08.628 "max_latency_us": 14745.6 00:35:08.628 } 00:35:08.628 ], 00:35:08.628 "core_count": 1 00:35:08.628 } 00:35:08.628 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:08.628 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:08.628 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:08.628 | .driver_specific 00:35:08.628 | .nvme_error 00:35:08.628 | .status_code 00:35:08.628 | .command_transient_transport_error' 00:35:08.628 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 214 > 0 )) 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2500376 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2500376 ']' 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2500376 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500376 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500376' 00:35:08.888 killing process with pid 2500376 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2500376 00:35:08.888 Received shutdown signal, test time was about 2.000000 seconds 00:35:08.888 00:35:08.888 Latency(us) 00:35:08.888 [2024-12-06T16:01:57.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.888 [2024-12-06T16:01:57.581Z] =================================================================================================================== 00:35:08.888 [2024-12-06T16:01:57.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2500376 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2501060 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2501060 /var/tmp/bperf.sock 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2501060 ']' 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:08.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:08.888 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:08.888 [2024-12-06 17:01:57.564540] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
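The pass criterion for the run above is simply that at least one transient transport error was recorded: get_transient_errcount queries the bdevperf app over its RPC socket with bdev_get_iostat and extracts .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error (214 here, so the (( 214 > 0 )) gate holds), after which the bperf process is killed and relaunched for the 131072-byte, queue-depth-16 variant. A minimal sketch of the same extraction, assuming the rpc.py path and socket shown in the trace (the helper name transient_errcount is illustrative):

# Sketch: replicate get_transient_errcount by querying the bdevperf RPC socket.
# The script path and socket are taken from the trace; the JSON path mirrors
# the jq filter used by digest.sh.
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def transient_errcount(bdev: str = "nvme0n1",
                       sock: str = "/var/tmp/bperf.sock") -> int:
    out = subprocess.check_output([RPC, "-s", sock, "bdev_get_iostat", "-b", bdev])
    stat = json.loads(out)["bdevs"][0]
    return stat["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"]

# The harness then asserts the count is non-zero, e.g.:
#   assert transient_errcount() > 0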
00:35:08.888 [2024-12-06 17:01:57.564596] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501060 ] 00:35:08.888 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:08.888 Zero copy mechanism will not be used. 00:35:09.148 [2024-12-06 17:01:57.626988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.148 [2024-12-06 17:01:57.643224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:09.148 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:09.148 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:09.148 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:09.148 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:09.408 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:09.408 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.408 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.408 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.408 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.408 17:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:09.668 nvme0n1 00:35:09.668 17:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:09.668 17:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.668 17:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:09.668 17:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.668 17:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:09.668 17:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:09.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:09.668 Zero copy mechanism will not be used. 00:35:09.668 Running I/O for 2 seconds... 
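The trace above shows the full setup for this second pass: bdevperf is restarted with -w randread -o 131072 -q 16 -z, NVMe error statistics and unlimited bdev retries are enabled, the CRC32C error injection is first disabled, presumably so the attach itself does not trip digest errors, the controller is attached with data digest enabled (--ddgst), and only then is the injection re-armed as corrupt with interval 32 before perform_tests drives the 2-second workload. A sketch of that RPC sequence, assuming rpc_cmd in the harness talks to the target application's default RPC socket (not shown in this excerpt) while bperf_rpc talks to /var/tmp/bperf.sock; the helper names are illustrative:

# Sketch: the RPC sequence driving this pass, with the command arguments
# copied from the digest.sh trace above.
import subprocess

RPC  = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
SOCK = "/var/tmp/bperf.sock"

def bperf_rpc(*args: str) -> None:
    # Goes to the bdevperf (initiator-side) app.
    subprocess.check_call([RPC, "-s", SOCK, *args])

def target_rpc(*args: str) -> None:
    # Assumption: rpc_cmd uses the target app's default RPC socket.
    subprocess.check_call([RPC, *args])

bperf_rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
target_rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
bperf_rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
          "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
          "-b", "nvme0")
target_rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "32")
# perform_tests is then issued through bdevperf.py rather than rpc.py:
#   .../examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests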
00:35:09.668 [2024-12-06 17:01:58.251119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.668 [2024-12-06 17:01:58.251152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.668 [2024-12-06 17:01:58.251161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.668 [2024-12-06 17:01:58.255619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.668 [2024-12-06 17:01:58.255642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.668 [2024-12-06 17:01:58.255650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.668 [2024-12-06 17:01:58.260615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.668 [2024-12-06 17:01:58.260635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.668 [2024-12-06 17:01:58.260642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.668 [2024-12-06 17:01:58.265628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.668 [2024-12-06 17:01:58.265646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.668 [2024-12-06 17:01:58.265654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.668 [2024-12-06 17:01:58.268147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.668 [2024-12-06 17:01:58.268165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.268172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.271689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.271708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.271714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.278747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.278766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.278772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.283913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.283931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.283937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.289129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.289147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.289158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.296825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.296843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.296850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.301909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.301927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.301934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.305156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.305174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.305180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.308570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.308588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.308594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.313583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.313601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.313608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.323503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.323521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.323528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.332608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.332627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.332633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.342216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.342234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.342241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.669 [2024-12-06 17:01:58.352793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.669 [2024-12-06 17:01:58.352815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.669 [2024-12-06 17:01:58.352821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.361322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.361341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.361348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.365720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.365738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.365745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.370110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.370128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.370134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.374247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.374264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.374271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.378120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.378137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.378143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.384092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.384115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.384121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.389107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.389124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.389130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.391149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.391166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.391173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.394272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.394289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.394296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.398248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.398266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 
[2024-12-06 17:01:58.398273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.407187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.407205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.407212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.415706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.415724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.415730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.426622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.426639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.426646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.437602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.437621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.437627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.448442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.448460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.930 [2024-12-06 17:01:58.448467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.930 [2024-12-06 17:01:58.459016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.930 [2024-12-06 17:01:58.459035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.459041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.470553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.470571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.470581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.481908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.481927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.481933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.492881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.492899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.492906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.503098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.503121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.503128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.513359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.513378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.513385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.525014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.525033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.525039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.536790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.536808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.536814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.545968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.545986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.545993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.553904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.553922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.553929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.562020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.562042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.562048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.568727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.568745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.568752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.579250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.579268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.579274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.587386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.587404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.587411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.594755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.594772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.594779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.602533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.602551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.602557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.608045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.608063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.608069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.611686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.611703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.611709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:09.931 [2024-12-06 17:01:58.619395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:09.931 [2024-12-06 17:01:58.619412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:09.931 [2024-12-06 17:01:58.619418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:10.192 [2024-12-06 17:01:58.627255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:10.192 [2024-12-06 17:01:58.627273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.192 [2024-12-06 17:01:58.627279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:10.192 [2024-12-06 17:01:58.630629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:10.192 [2024-12-06 17:01:58.630647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.192 [2024-12-06 17:01:58.630653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:10.192 [2024-12-06 17:01:58.634335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:10.192 [2024-12-06 17:01:58.634353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:10.192 [2024-12-06 17:01:58.634359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:10.192 [2024-12-06 17:01:58.641471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:10.192 
00:35:10.192 [2024-12-06 17:01:58.641488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.192 [2024-12-06 17:01:58.641495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:10.192 [2024-12-06 17:01:58.647646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300)
00:35:10.192 [2024-12-06 17:01:58.647662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.192 [2024-12-06 17:01:58.647669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:10.192 [2024-12-06 17:01:58.654925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300)
00:35:10.192 [2024-12-06 17:01:58.654942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:10.192 [2024-12-06 17:01:58.654949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... further identical data digest errors on tqpair=(0xed5300) elided, 2024-12-06 17:01:58.661352 through 17:01:59.246842: each event fails one READ (len:32, varying cid/lba) with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
00:35:10.717 4482.00 IOPS, 560.25 MiB/s [2024-12-06T16:01:59.410Z]
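The pattern above is SPDK's NVMe/TCP initiator rejecting incoming C2H data whose data digest (DDGST) does not match: nvme_tcp_accel_seq_recv_compute_crc32_done is the completion hook of the accel-sequence CRC computation over each data PDU payload, and on mismatch the affected READ completes with status 00/22 (COMMAND TRANSIENT TRANSPORT ERROR) and dnr:0, i.e. the host is permitted to retry, which is why the workload still reports throughput at this checkpoint. For reference, the digest NVMe/TCP carries in HDGST/DDGST is a standard CRC32C; a minimal, self-contained sketch of the check follows (bitwise CRC32C for clarity; the payload buffer and the injected bit-flip are illustrative, not taken from this run):

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Bitwise CRC32C (Castagnoli, reflected, polynomial 0x82F63B78) -- the
 * algorithm behind the NVMe/TCP HDGST/DDGST fields. Production code (SPDK
 * included) uses table- or instruction-accelerated variants instead. */
static uint32_t crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;	/* initial value */

	while (len--) {
		crc ^= *p++;
		for (int bit = 0; bit < 8; bit++)
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
	}
	return crc ^ 0xFFFFFFFFu;	/* final XOR */
}

int main(void)
{
	uint8_t pdu_payload[512] = {0};	/* hypothetical C2H data PDU payload */
	uint32_t ddgst = crc32c(pdu_payload, sizeof(pdu_payload));

	pdu_payload[100] ^= 0x01;	/* simulate corruption on the wire */
	if (crc32c(pdu_payload, sizeof(pdu_payload)) != ddgst)
		fprintf(stderr, "data digest error\n");	/* the failure logged above */
	return 0;
}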
[... the same pattern continues, 2024-12-06 17:01:59.251733 through 17:01:59.742624, still on tqpair=(0xed5300): data digest error, failed READ, COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0 ...]
00:35:11.239 [2024-12-06 17:01:59.752124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300)
00:35:11.239
[2024-12-06 17:01:59.752143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.239 [2024-12-06 17:01:59.752149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.239 [2024-12-06 17:01:59.762410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.239 [2024-12-06 17:01:59.762428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.239 [2024-12-06 17:01:59.762434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.239 [2024-12-06 17:01:59.773282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.239 [2024-12-06 17:01:59.773300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.239 [2024-12-06 17:01:59.773307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.239 [2024-12-06 17:01:59.784455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.784473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.784479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.795235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.795253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.795259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.806630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.806649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.806655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.817489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.817507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.817514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.826474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.826495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.826501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.834782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.834800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.834807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.844095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.844117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.844123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.851474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.851492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.851499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.857887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.857905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.857911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.864186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.864204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.864210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.874382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.874401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.874407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.884361] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.884378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.884385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.894898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.894916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.894922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.905592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.905609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.905616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.915372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.915389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.915396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.240 [2024-12-06 17:01:59.921488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.240 [2024-12-06 17:01:59.921505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.240 [2024-12-06 17:01:59.921512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:01:59.931573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:01:59.931592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:01:59.931598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:01:59.942798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:01:59.942816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:01:59.942822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:35:11.501 [2024-12-06 17:01:59.953420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:01:59.953438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:01:59.953445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:01:59.963856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:01:59.963874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:01:59.963881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:01:59.974588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:01:59.974606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:01:59.974613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:01:59.985169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:01:59.985187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:01:59.985197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:01:59.996937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:01:59.996955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:01:59.996961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.007411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.007431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.007438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.017832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.017851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.017858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.029225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.029244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.029250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.041584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.041603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.041610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.053018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.053036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.053043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.065502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.065520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.065526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.077261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.077279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.077285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.087793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.087816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.087822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.098804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.098822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.098829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.109136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.109154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.501 [2024-12-06 17:02:00.109161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.501 [2024-12-06 17:02:00.112834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.501 [2024-12-06 17:02:00.112852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.112859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.117639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.117658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.117664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.123689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.123707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.123714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.127405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.127423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.127431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.130224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.130242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.130249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.133840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.133859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.133869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.138289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.138307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.138314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.149059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.149077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.149084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.160075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.160094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.160105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.170675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.170693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.170699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.502 [2024-12-06 17:02:00.182550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.502 [2024-12-06 17:02:00.182568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.502 [2024-12-06 17:02:00.182574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.762 [2024-12-06 17:02:00.193619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.762 [2024-12-06 17:02:00.193638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.762 [2024-12-06 17:02:00.193644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.762 [2024-12-06 17:02:00.203775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.762 [2024-12-06 17:02:00.203793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.762 
[2024-12-06 17:02:00.203800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.762 [2024-12-06 17:02:00.214245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.762 [2024-12-06 17:02:00.214263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.762 [2024-12-06 17:02:00.214270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:11.762 [2024-12-06 17:02:00.225229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.762 [2024-12-06 17:02:00.225254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.762 [2024-12-06 17:02:00.225260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:11.762 [2024-12-06 17:02:00.236584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.762 [2024-12-06 17:02:00.236602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.762 [2024-12-06 17:02:00.236609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:11.762 3905.00 IOPS, 488.12 MiB/s [2024-12-06T16:02:00.455Z] [2024-12-06 17:02:00.249225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xed5300) 00:35:11.762 [2024-12-06 17:02:00.249240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.762 [2024-12-06 17:02:00.249246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:11.762 00:35:11.762 Latency(us) 00:35:11.762 [2024-12-06T16:02:00.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:11.762 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:11.762 nvme0n1 : 2.00 3908.57 488.57 0.00 0.00 4089.19 505.17 12561.07 00:35:11.762 [2024-12-06T16:02:00.455Z] =================================================================================================================== 00:35:11.762 [2024-12-06T16:02:00.455Z] Total : 3908.57 488.57 0.00 0.00 4089.19 505.17 12561.07 00:35:11.762 { 00:35:11.762 "results": [ 00:35:11.762 { 00:35:11.762 "job": "nvme0n1", 00:35:11.762 "core_mask": "0x2", 00:35:11.762 "workload": "randread", 00:35:11.762 "status": "finished", 00:35:11.762 "queue_depth": 16, 00:35:11.762 "io_size": 131072, 00:35:11.762 "runtime": 2.002268, 00:35:11.762 "iops": 3908.567684246065, 00:35:11.762 "mibps": 488.5709605307581, 00:35:11.762 "io_failed": 0, 00:35:11.762 "io_timeout": 0, 00:35:11.762 "avg_latency_us": 4089.189034841128, 00:35:11.762 "min_latency_us": 505.17333333333335, 00:35:11.762 "max_latency_us": 12561.066666666668 00:35:11.762 } 00:35:11.762 ], 00:35:11.762 "core_count": 1 00:35:11.762 } 00:35:11.762 17:02:00 
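The summary JSON above is self-consistent: the reported mibps is just iops scaled by the 128 KiB I/O size. A quick check with jq, assuming the results object has been saved to a local file (results.json is a hypothetical name, not produced by the harness):

    # Recompute MiB/s from the bdevperf summary: iops * io_size / 2^20.
    # results.json is a hypothetical copy of the JSON block above.
    jq -r '.results[0] | .iops * .io_size / 1048576' results.json
    # prints 488.5709605307581, matching the reported "mibps" field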
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 253 > 0 ))
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2501060
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2501060 ']'
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2501060
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:11.762 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501060
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501060'
killing process with pid 2501060
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2501060
00:35:12.021 Received shutdown signal, test time was about 2.000000 seconds
00:35:12.021 Latency(us)
00:35:12.021 [2024-12-06T16:02:00.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:12.021 [2024-12-06T16:02:00.714Z] ===================================================================================================================
00:35:12.021 [2024-12-06T16:02:00.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2501060
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2501734
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2501734 /var/tmp/bperf.sock
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2501734 ']'
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:12.021 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:12.021 [2024-12-06 17:02:00.581066] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization...
00:35:12.021 [2024-12-06 17:02:00.581118] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2501734 ]
00:35:12.021 [2024-12-06 17:02:00.635872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:12.021 [2024-12-06 17:02:00.652082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:12.281 17:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:12.542 nvme0n1
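Condensed, the setup the trace above performs is a handful of RPC calls against the freshly started bdevperf instance. A minimal sketch follows; the commands, socket, and arguments are taken from the trace, while the $rpc shorthand is an assumption added for readability:

    # Sketch of the digest.sh setup traced above, not a verbatim excerpt.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Keep per-status-code NVMe error counters and retry failed I/O forever.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Start with crc32c error injection disabled, then attach the controller
    # with data digest (--ddgst) enabled so each TCP data PDU is checksummed.
    $rpc accel_error_inject_error -o crc32c -t disable
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0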
00:35:12.542 17:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
17:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
17:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
17:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
17:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
17:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:12.542 Running I/O for 2 seconds...
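With corruption now injected into the crc32c operation, writes whose data digest check fails complete with TRANSIENT TRANSPORT ERROR, which is the flood that follows. After the run the harness reads that counter back the same way get_transient_errcount did above; a sketch of the read, with the jq filter copied from the trace and the $rpc shorthand again an assumption:

    # Mirror of get_transient_errcount (host/digest.sh@27-28): fetch iostat
    # for nvme0n1 and extract the transient-transport-error completion count.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    errcount=$($rpc bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # digest.sh@71 asserts the count is non-zero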
00:35:12.542 [2024-12-06 17:02:01.196576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef46d0
00:35:12.542 [2024-12-06 17:02:01.197382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:12.542 [2024-12-06 17:02:01.197409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0
[... the same three-line pattern (Data digest error on tqpair=(0x1165a70) with pdu=0x200016e... -> WRITE sqid:1 ... len:1 -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1) repeats from 17:02:01.205 through 17:02:01.575 with varying cid, lba, and pdu ...]
00:35:13.065 [2024-12-06 17:02:01.583684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0bc0
00:35:13.065 [2024-12-06 17:02:01.584395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:13.065 [2024-12-06 17:02:01.584411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.592069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef1ca0 00:35:13.065 [2024-12-06 17:02:01.592761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.592777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.600462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efb048 00:35:13.065 [2024-12-06 17:02:01.601135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.601154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.608880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eea680 00:35:13.065 [2024-12-06 17:02:01.609577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.609593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.617295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeb760 00:35:13.065 [2024-12-06 17:02:01.617986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.618002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.625692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef8618 00:35:13.065 [2024-12-06 17:02:01.626385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.626401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.634094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7538 00:35:13.065 [2024-12-06 17:02:01.634808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.634824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.642491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef6458 00:35:13.065 [2024-12-06 17:02:01.643138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.643154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.650888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef5378 00:35:13.065 [2024-12-06 17:02:01.651540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:28 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.651557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.659297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef4298 00:35:13.065 [2024-12-06 17:02:01.659984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.660000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.065 [2024-12-06 17:02:01.667702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efb8b8 00:35:13.065 [2024-12-06 17:02:01.668388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.065 [2024-12-06 17:02:01.668405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.676093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef9b30 00:35:13.066 [2024-12-06 17:02:01.676780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.676799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.684478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee88f8 00:35:13.066 [2024-12-06 17:02:01.685161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.685177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.692856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eedd58 00:35:13.066 [2024-12-06 17:02:01.693547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.693563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.701256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeee38 00:35:13.066 [2024-12-06 17:02:01.701942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.701959] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.709655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeff18 00:35:13.066 [2024-12-06 17:02:01.710356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.710372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.718054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0ff8 00:35:13.066 [2024-12-06 17:02:01.718748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.718764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.726435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef20d8 00:35:13.066 [2024-12-06 17:02:01.727145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.727161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.734811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef31b8 00:35:13.066 [2024-12-06 17:02:01.735465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.735481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.743286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efa3a0 00:35:13.066 [2024-12-06 17:02:01.743975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.743992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.066 [2024-12-06 17:02:01.751676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee23b8 00:35:13.066 [2024-12-06 17:02:01.752340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.066 [2024-12-06 17:02:01.752356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.760076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee12d8 00:35:13.327 [2024-12-06 17:02:01.760767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.760784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.768470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef81e0 00:35:13.327 [2024-12-06 17:02:01.769135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.769151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.776853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7100 00:35:13.327 [2024-12-06 17:02:01.777542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.777558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.785242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef6020 00:35:13.327 [2024-12-06 17:02:01.785934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.785950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.793637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef4f40 00:35:13.327 [2024-12-06 17:02:01.794325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.794341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.802034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef3e60 00:35:13.327 [2024-12-06 17:02:01.802723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.802739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.810426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef8e88 00:35:13.327 [2024-12-06 17:02:01.811132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.811148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.818803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef9f68 00:35:13.327 [2024-12-06 17:02:01.819488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 
17:02:01.819504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.827193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eec840 00:35:13.327 [2024-12-06 17:02:01.827881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.827897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.835589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eed920 00:35:13.327 [2024-12-06 17:02:01.836283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.836300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.843989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeea00 00:35:13.327 [2024-12-06 17:02:01.844679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.844694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.852402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eefae0 00:35:13.327 [2024-12-06 17:02:01.853116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.853132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.860791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0bc0 00:35:13.327 [2024-12-06 17:02:01.861485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.861501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.869177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef1ca0 00:35:13.327 [2024-12-06 17:02:01.869872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.869888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.877579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efb048 00:35:13.327 [2024-12-06 17:02:01.878267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:13.327 [2024-12-06 17:02:01.878283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.885999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eea680 00:35:13.327 [2024-12-06 17:02:01.886695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.886711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.894412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeb760 00:35:13.327 [2024-12-06 17:02:01.895110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.895128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.902800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef8618 00:35:13.327 [2024-12-06 17:02:01.903490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.903506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.911188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7538 00:35:13.327 [2024-12-06 17:02:01.911882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.911897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.919575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef6458 00:35:13.327 [2024-12-06 17:02:01.920283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.920298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.927971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef5378 00:35:13.327 [2024-12-06 17:02:01.928667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.928683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.936375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef4298 00:35:13.327 [2024-12-06 17:02:01.937084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11892 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.937102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.944776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efb8b8 00:35:13.327 [2024-12-06 17:02:01.945526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.945543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.327 [2024-12-06 17:02:01.953171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef9b30 00:35:13.327 [2024-12-06 17:02:01.953844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.327 [2024-12-06 17:02:01.953860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.328 [2024-12-06 17:02:01.961551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee88f8 00:35:13.328 [2024-12-06 17:02:01.962267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.328 [2024-12-06 17:02:01.962282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.328 [2024-12-06 17:02:01.969936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eedd58 00:35:13.328 [2024-12-06 17:02:01.970635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.328 [2024-12-06 17:02:01.970651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.328 [2024-12-06 17:02:01.978346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeee38 00:35:13.328 [2024-12-06 17:02:01.979028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.328 [2024-12-06 17:02:01.979044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.328 [2024-12-06 17:02:01.986740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeff18 00:35:13.328 [2024-12-06 17:02:01.987454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.328 [2024-12-06 17:02:01.987469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.328 [2024-12-06 17:02:01.995127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0ff8 00:35:13.328 [2024-12-06 17:02:01.995845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1476 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.328 [2024-12-06 17:02:01.995861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.328 [2024-12-06 17:02:02.003507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef20d8 00:35:13.328 [2024-12-06 17:02:02.004225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.328 [2024-12-06 17:02:02.004241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.328 [2024-12-06 17:02:02.011889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef31b8 00:35:13.328 [2024-12-06 17:02:02.012606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.328 [2024-12-06 17:02:02.012621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.589 [2024-12-06 17:02:02.020293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efa3a0 00:35:13.589 [2024-12-06 17:02:02.020986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.589 [2024-12-06 17:02:02.021002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.589 [2024-12-06 17:02:02.028696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee23b8 00:35:13.589 [2024-12-06 17:02:02.029398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.589 [2024-12-06 17:02:02.029414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.589 [2024-12-06 17:02:02.037107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee12d8 00:35:13.589 [2024-12-06 17:02:02.037795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.589 [2024-12-06 17:02:02.037811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.589 [2024-12-06 17:02:02.045487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef81e0 00:35:13.589 [2024-12-06 17:02:02.046180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.589 [2024-12-06 17:02:02.046196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.589 [2024-12-06 17:02:02.053874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7100 00:35:13.589 [2024-12-06 17:02:02.054566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 
nsid:1 lba:20695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.589 [2024-12-06 17:02:02.054582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.589 [2024-12-06 17:02:02.062267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef6020 00:35:13.589 [2024-12-06 17:02:02.062963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.589 [2024-12-06 17:02:02.062979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.589 [2024-12-06 17:02:02.070672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef4f40 00:35:13.589 [2024-12-06 17:02:02.071342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.071358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.079066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef3e60 00:35:13.590 [2024-12-06 17:02:02.079764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.079780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.087455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef8e88 00:35:13.590 [2024-12-06 17:02:02.088136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.088152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.095830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef9f68 00:35:13.590 [2024-12-06 17:02:02.096523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.096539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.104211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eec840 00:35:13.590 [2024-12-06 17:02:02.104896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.104912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.112611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eed920 00:35:13.590 [2024-12-06 17:02:02.113289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:80 nsid:1 lba:16602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.113308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.121004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeea00 00:35:13.590 [2024-12-06 17:02:02.121691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.121707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.129403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eefae0 00:35:13.590 [2024-12-06 17:02:02.130111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.130127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.137793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0bc0 00:35:13.590 [2024-12-06 17:02:02.138486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.138502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.146173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef1ca0 00:35:13.590 [2024-12-06 17:02:02.146856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.146872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.154568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efb048 00:35:13.590 [2024-12-06 17:02:02.155285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.155300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.162967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eea680 00:35:13.590 [2024-12-06 17:02:02.163655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.590 [2024-12-06 17:02:02.163671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:13.590 [2024-12-06 17:02:02.171366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeb760 00:35:13.590 [2024-12-06 17:02:02.172079] nvme_qpair.c: 
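For context on what data_crc32_calc_done is verifying: NVMe/TCP protects each PDU's data with a CRC32C data digest (DDGST), and a mismatch between the computed and received digest is what produces the "Data digest error" lines above. The sketch below is illustrative only -- crc32c() and check_data_digest() are hypothetical names, not SPDK's API -- but the CRC32C parameters (reflected Castagnoli polynomial 0x82F63B78, initial value and final XOR of 0xFFFFFFFF) are the standard ones.

/* Minimal, self-contained sketch of an NVMe/TCP-style data digest check.
 * Hypothetical helper names; not SPDK code. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <inttypes.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int bit = 0; bit < 8; bit++) {
                        /* Reflected CRC32C (Castagnoli) polynomial. */
                        crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

/* Returns nonzero on mismatch -- the "Data digest error" case above. */
static int check_data_digest(const uint8_t *pdu_data, size_t len,
                             uint32_t received_ddgst)
{
        return crc32c(pdu_data, len) != received_ddgst;
}

int main(void)
{
        /* Standard CRC32C test vector: "123456789" -> 0xE3069283. */
        const uint8_t vec[] = "123456789";

        printf("crc32c = 0x%08" PRIX32 "\n", crc32c(vec, 9));
        printf("digest ok: %d\n", !check_data_digest(vec, 9, 0xE3069283u));
        return 0;
}

The "123456789" -> 0xE3069283 value is the well-known CRC32C test vector; a mismatch on real PDU data would surface the way tcp.c reports it in this log.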
[... pattern continues up to the periodic throughput sample below ...]
30276.00 IOPS, 118.27 MiB/s [2024-12-06T16:02:02.283Z]
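The throughput sample is consistent with the WRITE size shown in the surrounding entries (len:0x1000 = 4096 bytes of data per command):

$30276\ \mathrm{ops/s} \times 4096\ \mathrm{B/op} = 124{,}010{,}496\ \mathrm{B/s} \approx 118.27\ \mathrm{MiB/s}$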
[... after the sample, the digest-error / transient-transport-error sequence resumes at 17:02:02.196, with sqhd now stepping through 003b, for a further run of qid:1 WRITE commands; the captured log ends mid-entry at 17:02:02.516 ...]
dnr:0 00:35:13.853 [2024-12-06 17:02:02.524291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef2d80 00:35:13.853 [2024-12-06 17:02:02.524980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.853 [2024-12-06 17:02:02.524996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:13.853 [2024-12-06 17:02:02.532698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eebfd0 00:35:13.853 [2024-12-06 17:02:02.533385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.853 [2024-12-06 17:02:02.533401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:13.853 [2024-12-06 17:02:02.541091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7da8 00:35:13.853 [2024-12-06 17:02:02.541747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:13.853 [2024-12-06 17:02:02.541763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.549482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef5be8 00:35:14.114 [2024-12-06 17:02:02.550176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.550195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.557861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeaef0 00:35:14.114 [2024-12-06 17:02:02.558551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.558567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.566256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef8e88 00:35:14.114 [2024-12-06 17:02:02.566949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.566964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.574650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eec840 00:35:14.114 [2024-12-06 17:02:02.575348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.575364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 
cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.583044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeea00 00:35:14.114 [2024-12-06 17:02:02.583699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.583715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.591435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0bc0 00:35:14.114 [2024-12-06 17:02:02.592143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.592159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.599814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efb048 00:35:14.114 [2024-12-06 17:02:02.600505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.600522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.608191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeb760 00:35:14.114 [2024-12-06 17:02:02.608881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.608896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.616591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7538 00:35:14.114 [2024-12-06 17:02:02.617268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.617284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.624993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef5378 00:35:14.114 [2024-12-06 17:02:02.625690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.625706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.633407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef35f0 00:35:14.114 [2024-12-06 17:02:02.634098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.634116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.641784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee6300 00:35:14.114 [2024-12-06 17:02:02.642470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.642486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.650173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eef270 00:35:14.114 [2024-12-06 17:02:02.650866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.650881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.659585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef1430 00:35:14.114 [2024-12-06 17:02:02.660733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.660748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.667799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef5378 00:35:14.114 [2024-12-06 17:02:02.668651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.668667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.676285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efe2e8 00:35:14.114 [2024-12-06 17:02:02.677113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.677128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.686038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ede470 00:35:14.114 [2024-12-06 17:02:02.687301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.687317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.693093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee3060 00:35:14.114 [2024-12-06 17:02:02.693822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.693838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.702874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef8a50 00:35:14.114 [2024-12-06 17:02:02.704161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.704177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.709955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef81e0 00:35:14.114 [2024-12-06 17:02:02.710503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.710520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.718583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0ff8 00:35:14.114 [2024-12-06 17:02:02.719376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.719392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.726889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eee190 00:35:14.114 [2024-12-06 17:02:02.727619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.727635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.735323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef96f8 00:35:14.114 [2024-12-06 17:02:02.736137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.736152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.114 [2024-12-06 17:02:02.743708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7100 00:35:14.114 [2024-12-06 17:02:02.744505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.114 [2024-12-06 17:02:02.744521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.115 [2024-12-06 17:02:02.752112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee12d8 00:35:14.115 [2024-12-06 17:02:02.752900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.115 [2024-12-06 17:02:02.752916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.115 [2024-12-06 17:02:02.760516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016edf988 00:35:14.115 [2024-12-06 17:02:02.761303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.115 [2024-12-06 17:02:02.761319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.115 [2024-12-06 17:02:02.768932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee4578 00:35:14.115 [2024-12-06 17:02:02.769720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.115 [2024-12-06 17:02:02.769742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.115 [2024-12-06 17:02:02.777336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef6458 00:35:14.115 [2024-12-06 17:02:02.778146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.115 [2024-12-06 17:02:02.778162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.115 [2024-12-06 17:02:02.785726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeff18 00:35:14.115 [2024-12-06 17:02:02.786515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.115 [2024-12-06 17:02:02.786531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.115 [2024-12-06 17:02:02.794123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efbcf0 00:35:14.115 [2024-12-06 17:02:02.794909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.115 [2024-12-06 17:02:02.794924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.115 [2024-12-06 17:02:02.802529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee6300 00:35:14.115 [2024-12-06 17:02:02.803321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.115 [2024-12-06 17:02:02.803336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.810937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efa7d8 00:35:14.376 [2024-12-06 17:02:02.811730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 
17:02:02.811746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.819343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef81e0 00:35:14.376 [2024-12-06 17:02:02.820150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.820165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.827727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee23b8 00:35:14.376 [2024-12-06 17:02:02.828478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.828494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.836131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ede8a8 00:35:14.376 [2024-12-06 17:02:02.836923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.836938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.844536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee27f0 00:35:14.376 [2024-12-06 17:02:02.845312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.845328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.852939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeee38 00:35:14.376 [2024-12-06 17:02:02.853725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.853741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.861349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef0ff8 00:35:14.376 [2024-12-06 17:02:02.862140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.862155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.869773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eee190 00:35:14.376 [2024-12-06 17:02:02.870563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:14.376 [2024-12-06 17:02:02.870579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.878179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef96f8 00:35:14.376 [2024-12-06 17:02:02.878965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.878981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.886577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7100 00:35:14.376 [2024-12-06 17:02:02.887363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.887379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.894985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee12d8 00:35:14.376 [2024-12-06 17:02:02.895782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.895798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.903788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeff18 00:35:14.376 [2024-12-06 17:02:02.904858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.904874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.914294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeaef0 00:35:14.376 [2024-12-06 17:02:02.915811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.915827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.920261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee0a68 00:35:14.376 [2024-12-06 17:02:02.920966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.920981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.930153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eefae0 00:35:14.376 [2024-12-06 17:02:02.931216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25505 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.931232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.938451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eed920 00:35:14.376 [2024-12-06 17:02:02.939480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.939495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.946854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7da8 00:35:14.376 [2024-12-06 17:02:02.947883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.947898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.955269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eebfd0 00:35:14.376 [2024-12-06 17:02:02.956344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.956360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.963657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee5ec8 00:35:14.376 [2024-12-06 17:02:02.964731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.964747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.972056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeb328 00:35:14.376 [2024-12-06 17:02:02.973149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.973165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.980457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee3060 00:35:14.376 [2024-12-06 17:02:02.981526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.981542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.988871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eecc78 00:35:14.376 [2024-12-06 17:02:02.989932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3037 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.989951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:02.997281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efc128 00:35:14.376 [2024-12-06 17:02:02.998339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:02.998355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:03.005690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eeea00 00:35:14.376 [2024-12-06 17:02:03.006759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.376 [2024-12-06 17:02:03.006774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.376 [2024-12-06 17:02:03.014083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eec840 00:35:14.376 [2024-12-06 17:02:03.015156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.377 [2024-12-06 17:02:03.015173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.377 [2024-12-06 17:02:03.022510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eed0b0 00:35:14.377 [2024-12-06 17:02:03.023579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.377 [2024-12-06 17:02:03.023595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.377 [2024-12-06 17:02:03.030918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016edf550 00:35:14.377 [2024-12-06 17:02:03.031985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.377 [2024-12-06 17:02:03.032001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.377 [2024-12-06 17:02:03.039348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016eea248 00:35:14.377 [2024-12-06 17:02:03.040410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.377 [2024-12-06 17:02:03.040425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.377 [2024-12-06 17:02:03.047745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efb048 00:35:14.377 [2024-12-06 17:02:03.048796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:72 nsid:1 lba:6031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.377 [2024-12-06 17:02:03.048812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:14.377 [2024-12-06 17:02:03.055532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef46d0 00:35:14.377 [2024-12-06 17:02:03.056774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.377 [2024-12-06 17:02:03.056790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:14.377 [2024-12-06 17:02:03.063287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef96f8 00:35:14.377 [2024-12-06 17:02:03.063988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.377 [2024-12-06 17:02:03.064004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:14.637 [2024-12-06 17:02:03.071998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef5378 00:35:14.638 [2024-12-06 17:02:03.072932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.072949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.082384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efc560 00:35:14.638 [2024-12-06 17:02:03.083815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.083830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.088723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee9e10 00:35:14.638 [2024-12-06 17:02:03.089446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.089463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.098502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016efda78 00:35:14.638 [2024-12-06 17:02:03.099446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.099461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.106914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee0ea0 00:35:14.638 [2024-12-06 17:02:03.107894] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.107909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.115970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee6b70 00:35:14.638 [2024-12-06 17:02:03.117082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.117098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.124315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef46d0 00:35:14.638 [2024-12-06 17:02:03.125403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.125420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.131647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef2d80 00:35:14.638 [2024-12-06 17:02:03.132467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.132482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.140003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef3e60 00:35:14.638 [2024-12-06 17:02:03.140786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.140801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.148576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef7da8 00:35:14.638 [2024-12-06 17:02:03.149362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.149377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.156985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee5658 00:35:14.638 [2024-12-06 17:02:03.157768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.157784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.165398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee49b0 00:35:14.638 [2024-12-06 17:02:03.166214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.166230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.173812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef46d0 00:35:14.638 [2024-12-06 17:02:03.174596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.174612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.182229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ef3e60 00:35:14.638 [2024-12-06 17:02:03.183006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.183022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:14.638 [2024-12-06 17:02:03.191145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165a70) with pdu=0x200016ee6b70 00:35:14.638 30293.50 IOPS, 118.33 MiB/s [2024-12-06T16:02:03.331Z] [2024-12-06 17:02:03.191874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:14.638 [2024-12-06 17:02:03.191887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:14.638 00:35:14.638 Latency(us) 00:35:14.638 [2024-12-06T16:02:03.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.638 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:14.638 nvme0n1 : 2.00 30312.21 118.41 0.00 0.00 4218.64 2061.65 10649.60 00:35:14.638 [2024-12-06T16:02:03.331Z] =================================================================================================================== 00:35:14.638 [2024-12-06T16:02:03.331Z] Total : 30312.21 118.41 0.00 0.00 4218.64 2061.65 10649.60 00:35:14.638 { 00:35:14.638 "results": [ 00:35:14.638 { 00:35:14.638 "job": "nvme0n1", 00:35:14.638 "core_mask": "0x2", 00:35:14.638 "workload": "randwrite", 00:35:14.638 "status": "finished", 00:35:14.638 "queue_depth": 128, 00:35:14.638 "io_size": 4096, 00:35:14.638 "runtime": 2.002988, 00:35:14.638 "iops": 30312.21355295189, 00:35:14.638 "mibps": 118.40708419121832, 00:35:14.638 "io_failed": 0, 00:35:14.638 "io_timeout": 0, 00:35:14.638 "avg_latency_us": 4218.642756046007, 00:35:14.638 "min_latency_us": 2061.653333333333, 00:35:14.638 "max_latency_us": 10649.6 00:35:14.638 } 00:35:14.638 ], 00:35:14.638 "core_count": 1 00:35:14.638 } 00:35:14.638 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:14.638 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:14.638 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:14.638 | .driver_specific 00:35:14.638 | .nvme_error 
00:35:14.638 | .status_code 00:35:14.638 | .command_transient_transport_error' 00:35:14.638 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 238 > 0 )) 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2501734 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2501734 ']' 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2501734 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2501734 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2501734' 00:35:14.899 killing process with pid 2501734 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2501734 00:35:14.899 Received shutdown signal, test time was about 2.000000 seconds 00:35:14.899 00:35:14.899 Latency(us) 00:35:14.899 [2024-12-06T16:02:03.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.899 [2024-12-06T16:02:03.592Z] =================================================================================================================== 00:35:14.899 [2024-12-06T16:02:03.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2501734 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2502409 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2502409 /var/tmp/bperf.sock 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2502409 ']' 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 
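The trace above is the test's transient-error assertion: it reads the per-bdev NVMe error counters over the bdevperf RPC socket and requires a positive count (238 in this run). A minimal standalone sketch of the same check, assuming rpc.py from the SPDK scripts directory is on PATH and a bdevperf instance is listening on /var/tmp/bperf.sock (the errcount variable name is illustrative):

    # The counter is only populated because bdev_nvme_set_options was
    # called with --nvme-error-stat when the controller was configured.
    errcount=$(rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error
                 | .status_code | .command_transient_transport_error')
    # Each injected data-digest failure completes as COMMAND TRANSIENT
    # TRANSPORT ERROR (00/22), so the run passes only if some were counted.
    (( errcount > 0 ))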
00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:14.899 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:14.899 [2024-12-06 17:02:03.542380] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:35:14.899 [2024-12-06 17:02:03.542434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2502409 ] 00:35:14.899 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:14.899 Zero copy mechanism will not be used. 00:35:15.159 [2024-12-06 17:02:03.606701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.159 [2024-12-06 17:02:03.621262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.159 17:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:15.729 nvme0n1 00:35:15.729 17:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:15.729 17:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.729 17:02:04 
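The attach above is performed with CRC-32C error injection disabled so the controller can connect cleanly; the trace that follows re-arms the injector and starts the timed run. Pulled together, the sequence for this 131072-byte randwrite pass looks roughly like the sketch below (paths shortened from the full jenkins workspace paths in the trace; as read from the trace, rpc_cmd with no socket argument goes to the default RPC socket of the nvmf target app, while the -s calls go to the bdevperf app):

    sock=/var/tmp/bperf.sock
    # initiator: bdevperf in wait mode (-z), 128 KiB random writes, qd 16, 2 s
    ./build/examples/bdevperf -m 2 -r $sock -w randwrite -o 131072 -t 2 -q 16 -z &
    # initiator: count NVMe errors and retry forever so injected failures
    # surface as statistics rather than aborting the job
    rpc.py -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py accel_error_inject_error -o crc32c -t disable     # injection off for attach
    rpc.py -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0       # data digest enabled
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32   # arm corruption (-i 32 verbatim from the trace)
    ./examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests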
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:15.729 17:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.729 17:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:15.729 17:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:15.729 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:15.729 Zero copy mechanism will not be used. 00:35:15.729 Running I/O for 2 seconds... 00:35:15.729 [2024-12-06 17:02:04.315985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:15.729 [2024-12-06 17:02:04.316060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.729 [2024-12-06 17:02:04.316094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.729 [2024-12-06 17:02:04.321844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:15.729 [2024-12-06 17:02:04.322035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.729 [2024-12-06 17:02:04.322056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:15.729 [2024-12-06 17:02:04.330388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:15.729 [2024-12-06 17:02:04.330662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.729 [2024-12-06 17:02:04.330680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:15.729 [2024-12-06 17:02:04.335350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:15.729 [2024-12-06 17:02:04.335521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.729 [2024-12-06 17:02:04.335539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:15.729 [2024-12-06 17:02:04.340406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:15.729 [2024-12-06 17:02:04.340610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.729 [2024-12-06 17:02:04.340626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:15.729 [2024-12-06 17:02:04.350310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:15.729 [2024-12-06 17:02:04.350494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:15.729 [2024-12-06 17:02:04.350510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-message cycle repeats from 17:02:04.360311 through 17:02:04.906113, always on tqpair=(0x1165f50) with pdu=0x200016eff3c8: tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error, then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0|1 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 at a varying lba, then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0002/0022/0042/0062; only the timestamp, cid, lba, and sqhd differ between iterations ...]
00:35:16.249 [2024-12-06 17:02:04.914574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.249 [2024-12-06 17:02:04.914843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.249 [2024-12-06 17:02:04.914858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
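Every iteration in the run above is the same failure: the NVMe/TCP data digest (DDGST) check in tcp.c rejects the received PDU payload, and the in-flight WRITE is completed back to the host with a transient transport error instead of succeeding. The digest is a CRC32C over the PDU DATA field. Below is a minimal sketch of that check, assuming the standard CRC32C convention; the payload buffer and the received-digest value are hypothetical stand-ins for illustration, not SPDK structures:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the
     * digest algorithm NVMe/TCP uses for its HDGST/DDGST fields. */
    static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78 & (0U - (crc & 1)));
        }
        return ~crc;
    }

    int main(void)
    {
        /* Hypothetical PDU payload and digest trailer, wrong on purpose. */
        uint8_t data[32];
        memset(data, 0xA5, sizeof(data));
        uint32_t ddgst_received = 0xDEADBEEF;
        uint32_t ddgst_computed = crc32c(0, data, sizeof(data));
        if (ddgst_computed != ddgst_received) {
            /* This mismatch is the condition the log reports as a data
             * digest error; the command is then failed with a transient
             * transport status rather than being acknowledged. */
            printf("Data digest error: computed=0x%08x received=0x%08x\n",
                   ddgst_computed, ddgst_received);
        }
        return 0;
    }

The log lines name the real site of this check, data_crc32_calc_done in tcp.c; since dnr is 0 in every completion, each failure is reported as retryable.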
[2024-12-06 17:02:04.923364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.249 [2024-12-06 17:02:04.923619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.249 [2024-12-06 17:02:04.923635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the cycle continues unchanged through 17:02:05.309195: WRITE commands at varying lba on the same tqpair=(0x1165f50), each failing the digest check and completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22), p:0 m:0 dnr:0 throughout ...]
3998.00 IOPS, 499.75 MiB/s [2024-12-06T16:02:05.463Z]
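The trailing fields on each completion line encode the 16-bit NVMe status word. Here is a short decoder for the "(00/22) ... p:0 m:0 dnr:0" notation, with the bit layout taken from the NVMe base specification (phase tag in bit 0, status field in bits 1..15); the variable names are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t raw = (uint16_t)(0x22u << 1);  /* SCT=0, SC=0x22, p=m=dnr=0 */
        unsigned p   = raw & 0x1;               /* phase tag */
        unsigned sc  = (raw >> 1) & 0xff;       /* status code */
        unsigned sct = (raw >> 9) & 0x7;        /* status code type */
        unsigned m   = (raw >> 14) & 0x1;       /* more */
        unsigned dnr = (raw >> 15) & 0x1;       /* do not retry */

        /* Prints "(00/22) p:0 m:0 dnr:0", matching the log's notation. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }

So (00/22) is SCT 0x0 (generic command status) with SC 0x22, which the log spells out as COMMAND TRANSIENT TRANSPORT ERROR; dnr:0 leaves the command eligible for retry, consistent with the digest error being a transport-level rather than a media-level failure.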
00:35:16.770 [2024-12-06 17:02:05.317018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.317037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.325302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.325490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.325506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.335399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.335575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.335590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.345300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.345456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.345472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.354801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.355069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.355086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.363679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.363868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.363884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.372460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.372621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.372637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.381696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) 
with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.381832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.381848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.391077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.391261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.391278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.400361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.400542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.400558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.409542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.409667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.409683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.418113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.418235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.418251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.427059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.427301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.427317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.434939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.435173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.435189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.443605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.443783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.443800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.451853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.452084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.452106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:16.770 [2024-12-06 17:02:05.459984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:16.770 [2024-12-06 17:02:05.460148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:16.770 [2024-12-06 17:02:05.460164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.467734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.467923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.467939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.476452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.476678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.476694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.484510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.484720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.484737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.492083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.492320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.492336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.500335] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.500498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.500514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.508734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.508906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.508925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.515919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.516112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.516128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.523756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.523944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.523961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.531259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.531486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.531501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.539587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.539768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.539787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.547235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.547426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.547442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.030 
[2024-12-06 17:02:05.555364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.555536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.555553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.563936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.564132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.564148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.572663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.030 [2024-12-06 17:02:05.572826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.030 [2024-12-06 17:02:05.572840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.030 [2024-12-06 17:02:05.580755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.580981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.580998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.589571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.589749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.589764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.597382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.597490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.597505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.605932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.606083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.613573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.613765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.613782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.621148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.621319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.621336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.629327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.629566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.629582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.637252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.637504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.637521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.645399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.645600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.645617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.651740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.651852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.651868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.658577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.658867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.658884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.666058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.666177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.666193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.675386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.675634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.675651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.684152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.684400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.684416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.694095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.694313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.694329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.702043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.702189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.702205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.710996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.711186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.711203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.031 [2024-12-06 17:02:05.719157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.031 [2024-12-06 17:02:05.719364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.031 [2024-12-06 17:02:05.719380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.728830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.729020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.729036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.738610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.738799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.738815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.748508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.748670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.748690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.758013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.758169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.758192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.767359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.767527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.767543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.776495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.776671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.776688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.786078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.786334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.786351] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.795507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.795680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.795696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.306 [2024-12-06 17:02:05.804812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.306 [2024-12-06 17:02:05.804981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.306 [2024-12-06 17:02:05.804997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.814195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.814333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.814349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.823463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.823644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.823660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.833058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.833261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.833276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.843123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.843284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.843299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.852707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.852896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.852913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.862364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.862588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.862604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.872081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.872245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.872262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.881120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.881522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.881539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.891057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.891285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.891302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.901039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.901211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.901227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.909052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.909262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.909277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.917553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.917783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 
17:02:05.917799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.924409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.924523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.924539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.931570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.931790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.931807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.939832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.939964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.939980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.947970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.948159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.948176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.955536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.955721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.955737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.962517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.962626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.962642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.964920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.965036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:17.307 [2024-12-06 17:02:05.965052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.967477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.967601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.967617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.970866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.971309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.971329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.980390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.980612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.980628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.307 [2024-12-06 17:02:05.989780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.307 [2024-12-06 17:02:05.989935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.307 [2024-12-06 17:02:05.989951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:05.999594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:05.999779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:05.999795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.008301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.008504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.008519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.017473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.017666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.017682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.025806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.026027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.026042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.034974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.035174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.035190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.042778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.042884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.042901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.045180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.045278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.045294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.047608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.047713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.047729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.050205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.050299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.050315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.053626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.053719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.053734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.058963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.059151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.059168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.063899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.063993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.064009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.068487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.068580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.068596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.073110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.073202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.073217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.078643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.078737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.078753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.082429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.082520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.082536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.086163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.086254] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.086270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.091571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.091674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.091690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.096013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.096095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.096115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.103213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.103305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.103320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.108541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.108632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.108648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.113198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.568 [2024-12-06 17:02:06.113431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.568 [2024-12-06 17:02:06.113448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.568 [2024-12-06 17:02:06.119467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.119659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.119676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.127364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.127542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.127560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.137084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.137336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.137352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.145269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.145353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.145369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.152450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.152645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.152661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.161930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.162086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.162108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.171036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.171141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.171157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.177571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.177751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.177767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.185319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 
17:02:06.185483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.185499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.193880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.194124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.194140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.201734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.201821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.201838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.206013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.206096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.206117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.211151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.211239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.211254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.213885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.213970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.213986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.216286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.216381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.216397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.218787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with 
pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.218884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.218900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.221231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.221312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.221328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.223696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.223792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.223808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.226122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.226218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.226235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.230043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.230151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.230167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.237189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.237358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.237374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.240792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.240889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.240905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.243240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.243335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.243351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.245809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.245913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.245929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.250773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.250980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.250997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.569 [2024-12-06 17:02:06.258244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.569 [2024-12-06 17:02:06.258339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.569 [2024-12-06 17:02:06.258354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.267124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.267283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.267299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.273365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.273460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.273479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.280729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.280822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.280837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.284403] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.284496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.284511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.289278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.289371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.289386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.296978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.297073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.297088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.303721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.303814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.303830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.306166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.306272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.306289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:17.829 [2024-12-06 17:02:06.308576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.308677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.308694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:17.829 4149.00 IOPS, 518.62 MiB/s [2024-12-06T16:02:06.522Z] [2024-12-06 17:02:06.312058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1165f50) with pdu=0x200016eff3c8 00:35:17.829 [2024-12-06 17:02:06.312105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:17.829 [2024-12-06 17:02:06.312121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:17.830 00:35:17.830 Latency(us) 00:35:17.830 [2024-12-06T16:02:06.523Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:17.830 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:17.830 nvme0n1 : 2.00 4153.83 519.23 0.00 0.00 3848.15 1058.13 13380.27 00:35:17.830 [2024-12-06T16:02:06.523Z] =================================================================================================================== 00:35:17.830 [2024-12-06T16:02:06.523Z] Total : 4153.83 519.23 0.00 0.00 3848.15 1058.13 13380.27 00:35:17.830 { 00:35:17.830 "results": [ 00:35:17.830 { 00:35:17.830 "job": "nvme0n1", 00:35:17.830 "core_mask": "0x2", 00:35:17.830 "workload": "randwrite", 00:35:17.830 "status": "finished", 00:35:17.830 "queue_depth": 16, 00:35:17.830 "io_size": 131072, 00:35:17.830 "runtime": 2.00273, 00:35:17.830 "iops": 4153.830022019943, 00:35:17.830 "mibps": 519.2287527524928, 00:35:17.830 "io_failed": 0, 00:35:17.830 "io_timeout": 0, 00:35:17.830 "avg_latency_us": 3848.1503129382545, 00:35:17.830 "min_latency_us": 1058.1333333333334, 00:35:17.830 "max_latency_us": 13380.266666666666 00:35:17.830 } 00:35:17.830 ], 00:35:17.830 "core_count": 1 00:35:17.830 } 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:17.830 | .driver_specific 00:35:17.830 | .nvme_error 00:35:17.830 | .status_code 00:35:17.830 | .command_transient_transport_error' 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 269 > 0 )) 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2502409 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2502409 ']' 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2502409 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.830 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2502409 00:35:18.089 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:18.089 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:18.089 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2502409' 00:35:18.089 killing process with pid 2502409 00:35:18.089 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2502409 00:35:18.089 Received shutdown signal, test time was about 2.000000 seconds 00:35:18.089 00:35:18.089 Latency(us) 00:35:18.089 
[2024-12-06T16:02:06.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.090 [2024-12-06T16:02:06.783Z] =================================================================================================================== 00:35:18.090 [2024-12-06T16:02:06.783Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2502409 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2500355 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2500355 ']' 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2500355 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2500355 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2500355' 00:35:18.090 killing process with pid 2500355 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2500355 00:35:18.090 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2500355 00:35:18.349 00:35:18.349 real 0m12.599s 00:35:18.349 user 0m25.007s 00:35:18.349 sys 0m2.836s 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:18.350 ************************************ 00:35:18.350 END TEST nvmf_digest_error 00:35:18.350 ************************************ 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:18.350 rmmod nvme_tcp 00:35:18.350 rmmod nvme_fabrics 00:35:18.350 rmmod nvme_keyring 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:18.350 
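
For reference, the digest_error pass/fail decision traced above comes down to a single RPC round-trip: ask bdevperf for per-bdev iostat on its private socket and extract the NVMe status counter with the jq filter shown in the trace. A minimal standalone sketch of that pattern, using the same /var/tmp/bperf.sock socket, nvme0n1 bdev name, and workspace path as this run:

# Count COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions seen by bdevperf.
# Each injected data-digest corruption above surfaces as one such completion.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) && echo "transient transport errors: $errcount"

This run counted 269 of them, which is why the (( 269 > 0 )) assertion in the trace passes.
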
17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2500355 ']' 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2500355 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2500355 ']' 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2500355 00:35:18.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2500355) - No such process 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2500355 is not found' 00:35:18.350 Process with pid 2500355 is not found 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:18.350 17:02:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:20.255 00:35:20.255 real 0m32.945s 00:35:20.255 user 0m51.589s 00:35:20.255 sys 0m9.786s 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:20.255 ************************************ 00:35:20.255 END TEST nvmf_digest 00:35:20.255 ************************************ 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:20.255 17:02:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.516 ************************************ 00:35:20.516 START TEST nvmf_bdevperf 00:35:20.516 ************************************ 00:35:20.516 17:02:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh 
--transport=tcp 00:35:20.516 * Looking for test storage... 00:35:20.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.516 --rc genhtml_branch_coverage=1 00:35:20.516 --rc genhtml_function_coverage=1 00:35:20.516 --rc genhtml_legend=1 00:35:20.516 --rc geninfo_all_blocks=1 00:35:20.516 --rc geninfo_unexecuted_blocks=1 00:35:20.516 00:35:20.516 ' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.516 --rc genhtml_branch_coverage=1 00:35:20.516 --rc genhtml_function_coverage=1 00:35:20.516 --rc genhtml_legend=1 00:35:20.516 --rc geninfo_all_blocks=1 00:35:20.516 --rc geninfo_unexecuted_blocks=1 00:35:20.516 00:35:20.516 ' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.516 --rc genhtml_branch_coverage=1 00:35:20.516 --rc genhtml_function_coverage=1 00:35:20.516 --rc genhtml_legend=1 00:35:20.516 --rc geninfo_all_blocks=1 00:35:20.516 --rc geninfo_unexecuted_blocks=1 00:35:20.516 00:35:20.516 ' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:20.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:20.516 --rc genhtml_branch_coverage=1 00:35:20.516 --rc genhtml_function_coverage=1 00:35:20.516 --rc genhtml_legend=1 00:35:20.516 --rc geninfo_all_blocks=1 00:35:20.516 --rc geninfo_unexecuted_blocks=1 00:35:20.516 00:35:20.516 ' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.516 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:20.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:20.517 17:02:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:25.795 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:25.795 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:25.795 Found net devices under 0000:31:00.0: cvl_0_0 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:25.795 Found net devices under 0000:31:00.1: cvl_0_1 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.558 ms 00:35:25.795 00:35:25.795 --- 10.0.0.2 ping statistics --- 00:35:25.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.795 rtt min/avg/max/mdev = 0.558/0.558/0.558/0.000 ms 00:35:25.795 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:25.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:35:25.796 00:35:25.796 --- 10.0.0.1 ping statistics --- 00:35:25.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.796 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2507431 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2507431 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2507431 ']' 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:25.796 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:25.796 [2024-12-06 17:02:14.405718] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
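
For context on the interface plumbing traced above: nvmf_tcp_init moves one port of the detected ice-driven NIC pair into a private network namespace, so the target (10.0.0.2) and the initiator (10.0.0.1) exchange real TCP traffic across the link. A condensed replay of those commands, with the interface names detected in this run:

# Isolate the target-side port in its own namespace and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Admit NVMe/TCP on the listener port, then prove reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
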
00:35:25.796 [2024-12-06 17:02:14.405766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.796 [2024-12-06 17:02:14.476207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:26.056 [2024-12-06 17:02:14.492337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.056 [2024-12-06 17:02:14.492365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.056 [2024-12-06 17:02:14.492372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.056 [2024-12-06 17:02:14.492377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.056 [2024-12-06 17:02:14.492382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.056 [2024-12-06 17:02:14.493666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.056 [2024-12-06 17:02:14.493705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.056 [2024-12-06 17:02:14.493708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.056 [2024-12-06 17:02:14.591641] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.056 Malloc0 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
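
With the target's reactors running, bdevperf.sh provisions everything over JSON-RPC: nvmf_create_transport, bdev_malloc_create, and nvmf_create_subsystem are traced above, and the namespace and listener additions follow immediately below. Spelled out as plain rpc.py calls against the target's default socket (a sketch of the traced sequence, not the script verbatim):

# tgt_init: build an NVMe-oF/TCP subsystem backed by a RAM disk.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192-byte in-capsule data
"$rpc" bdev_malloc_create 64 512 -b Malloc0       # 64 MiB malloc bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
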
00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.056 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:26.056 [2024-12-06 17:02:14.638904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:26.057 { 00:35:26.057 "params": { 00:35:26.057 "name": "Nvme$subsystem", 00:35:26.057 "trtype": "$TEST_TRANSPORT", 00:35:26.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:26.057 "adrfam": "ipv4", 00:35:26.057 "trsvcid": "$NVMF_PORT", 00:35:26.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:26.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:26.057 "hdgst": ${hdgst:-false}, 00:35:26.057 "ddgst": ${ddgst:-false} 00:35:26.057 }, 00:35:26.057 "method": "bdev_nvme_attach_controller" 00:35:26.057 } 00:35:26.057 EOF 00:35:26.057 )") 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:26.057 17:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:26.057 "params": { 00:35:26.057 "name": "Nvme1", 00:35:26.057 "trtype": "tcp", 00:35:26.057 "traddr": "10.0.0.2", 00:35:26.057 "adrfam": "ipv4", 00:35:26.057 "trsvcid": "4420", 00:35:26.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:26.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:26.057 "hdgst": false, 00:35:26.057 "ddgst": false 00:35:26.057 }, 00:35:26.057 "method": "bdev_nvme_attach_controller" 00:35:26.057 }' 00:35:26.057 [2024-12-06 17:02:14.675908] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
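
The attach itself goes in through bdevperf's --json stream rather than the command line: gen_nvmf_target_json emits a bdev_nvme_attach_controller stanza and hands it to bdevperf as /dev/fd/62. Reformatted for readability, the stanza printf'd in the trace is equivalent to the heredoc below; note the template defaults hdgst/ddgst to false, whereas the digest tests earlier enabled them, which is where the CRC32C failures came from.

# Controller config consumed by bdevperf --json for this run,
# piped through the same jq pretty-print step the helper uses.
cat <<'EOF' | jq .
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
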
00:35:26.057 [2024-12-06 17:02:14.675955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507453 ] 00:35:26.057 [2024-12-06 17:02:14.739056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.316 [2024-12-06 17:02:14.755562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.316 Running I/O for 1 seconds... 00:35:27.268 12999.00 IOPS, 50.78 MiB/s 00:35:27.268 Latency(us) 00:35:27.268 [2024-12-06T16:02:15.961Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:27.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:27.268 Verification LBA range: start 0x0 length 0x4000 00:35:27.268 Nvme1n1 : 1.00 13067.74 51.05 0.00 0.00 9750.25 901.12 12724.91 00:35:27.268 [2024-12-06T16:02:15.961Z] =================================================================================================================== 00:35:27.268 [2024-12-06T16:02:15.961Z] Total : 13067.74 51.05 0.00 0.00 9750.25 901.12 12724.91 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2507784 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:27.528 { 00:35:27.528 "params": { 00:35:27.528 "name": "Nvme$subsystem", 00:35:27.528 "trtype": "$TEST_TRANSPORT", 00:35:27.528 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.528 "adrfam": "ipv4", 00:35:27.528 "trsvcid": "$NVMF_PORT", 00:35:27.528 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.528 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.528 "hdgst": ${hdgst:-false}, 00:35:27.528 "ddgst": ${ddgst:-false} 00:35:27.528 }, 00:35:27.528 "method": "bdev_nvme_attach_controller" 00:35:27.528 } 00:35:27.528 EOF 00:35:27.528 )") 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
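
The records that follow repeat the JSON plumbing for the second bdevperf instance; the substance of the scenario changes. This run lasts 15 seconds with the extra -f flag, and three seconds in the script kill -9's the target (pid 2507431) out from under it, as the trace below shows. A sketch of that choreography, reconstructed from the traced bdevperf.sh lines (gen_nvmf_target_json and nvmfpid come from the sourced test environment; the pids are this run's):

# Failover scenario: long verify run, target SIGKILLed mid-run.
bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!          # 2507784 in this run
sleep 3
kill -9 "$nvmfpid"      # 2507431: the nvmf target, not bdevperf
sleep 3
# In-flight WRITEs still queued on qpair 1 now complete with
# ABORTED - SQ DELETION (00/08) -- the flood of notices that follows.
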
00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:27.528 17:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:27.528 "params": { 00:35:27.528 "name": "Nvme1", 00:35:27.528 "trtype": "tcp", 00:35:27.528 "traddr": "10.0.0.2", 00:35:27.528 "adrfam": "ipv4", 00:35:27.528 "trsvcid": "4420", 00:35:27.528 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.528 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.528 "hdgst": false, 00:35:27.528 "ddgst": false 00:35:27.528 }, 00:35:27.528 "method": "bdev_nvme_attach_controller" 00:35:27.528 }' 00:35:27.528 [2024-12-06 17:02:16.068799] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:35:27.528 [2024-12-06 17:02:16.068854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2507784 ] 00:35:27.528 [2024-12-06 17:02:16.133196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.528 [2024-12-06 17:02:16.148128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.786 Running I/O for 15 seconds... 00:35:30.101 12711.00 IOPS, 49.65 MiB/s [2024-12-06T16:02:19.057Z] 12828.00 IOPS, 50.11 MiB/s [2024-12-06T16:02:19.057Z] 17:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2507431 00:35:30.364 17:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:30.364 [2024-12-06 17:02:19.051070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.364 [2024-12-06 17:02:19.051110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.364 [2024-12-06 17:02:19.051133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.364 [2024-12-06 17:02:19.051142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.364 [2024-12-06 17:02:19.051150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.364 [2024-12-06 17:02:19.051157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.364 [2024-12-06 17:02:19.051169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.364 [2024-12-06 17:02:19.051174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.364 [2024-12-06 17:02:19.051182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.364 [2024-12-06 17:02:19.051187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.364 [2024-12-06 17:02:19.051196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.364 [2024-12-06 17:02:19.051204] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... roughly 120 further near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: every outstanding WRITE (lba 4824-5360) and READ (lba 4344-4760) on qid:1 completes as ABORTED - SQ DELETION (00/08) once the killed target's submission queue is torn down ...] 00:35:30.367 [2024-12-06 17:02:19.052674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf94e0 is same with the state(6) to be
set 00:35:30.367 [2024-12-06 17:02:19.052681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:30.367 [2024-12-06 17:02:19.052685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:30.367 [2024-12-06 17:02:19.052690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4768 len:8 PRP1 0x0 PRP2 0x0 00:35:30.367 [2024-12-06 17:02:19.052697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.631 [2024-12-06 17:02:19.055190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.631 [2024-12-06 17:02:19.055237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:30.631 [2024-12-06 17:02:19.055863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.631 [2024-12-06 17:02:19.055876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:30.631 [2024-12-06 17:02:19.055883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:30.631 [2024-12-06 17:02:19.056036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:30.631 [2024-12-06 17:02:19.056199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.631 [2024-12-06 17:02:19.056208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.631 [2024-12-06 17:02:19.056215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.631 [2024-12-06 17:02:19.056221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.631 [2024-12-06 17:02:19.068099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.631 [2024-12-06 17:02:19.068567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.631 [2024-12-06 17:02:19.068581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:30.631 [2024-12-06 17:02:19.068587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:30.631 [2024-12-06 17:02:19.068741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:30.631 [2024-12-06 17:02:19.068893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.631 [2024-12-06 17:02:19.068899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.631 [2024-12-06 17:02:19.068905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.631 [2024-12-06 17:02:19.068910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.631 [2024-12-06 17:02:19.080795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.631 [2024-12-06 17:02:19.081363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.631 [2024-12-06 17:02:19.081396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:30.631 [2024-12-06 17:02:19.081404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:30.631 [2024-12-06 17:02:19.081575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:30.631 [2024-12-06 17:02:19.081730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.631 [2024-12-06 17:02:19.081737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.631 [2024-12-06 17:02:19.081742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.631 [2024-12-06 17:02:19.081748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.631 [2024-12-06 17:02:19.093489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.631 [2024-12-06 17:02:19.094071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.631 [2024-12-06 17:02:19.094116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:30.631 [2024-12-06 17:02:19.094125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:30.631 [2024-12-06 17:02:19.094293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:30.631 [2024-12-06 17:02:19.094448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.631 [2024-12-06 17:02:19.094454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.632 [2024-12-06 17:02:19.094460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.632 [2024-12-06 17:02:19.094465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:30.632 [2024-12-06 17:02:19.106200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.632 [2024-12-06 17:02:19.106726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.632 [2024-12-06 17:02:19.106756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:30.632 [2024-12-06 17:02:19.106765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:30.632 [2024-12-06 17:02:19.106933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:30.632 [2024-12-06 17:02:19.107089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.632 [2024-12-06 17:02:19.107095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.632 [2024-12-06 17:02:19.107110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.632 [2024-12-06 17:02:19.107117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:30.632 [2024-12-06 17:02:19.118846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:30.632 [2024-12-06 17:02:19.119077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:30.632 [2024-12-06 17:02:19.119093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:30.632 [2024-12-06 17:02:19.119105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:30.632 [2024-12-06 17:02:19.119261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:30.632 [2024-12-06 17:02:19.119413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:30.632 [2024-12-06 17:02:19.119419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:30.632 [2024-12-06 17:02:19.119424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:30.632 [2024-12-06 17:02:19.119429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[... 26 near-identical reset/reconnect failure cycles, 17:02:19.131588 through 17:02:19.450199: each repeats the nine messages above with only the timestamps changing, connect() to 10.0.0.2 port 4420 refused (errno = 111) roughly every 12-13 ms ...]
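On Linux, errno 111 is ECONNREFUSED: every cycle above is the host's reconnect attempt being actively refused at 10.0.0.2:4420, which typically means nothing is listening on that port at that moment (presumably the NVMe-oF target was taken down at this point of the test to exercise host-side reset handling). The follow-on "(9): Bad file descriptor" is EBADF from trying to flush a qpair whose socket never came up. The standalone C sketch below (illustrative only, not SPDK code; the address and port are taken from the log, everything else is assumed) reproduces the same errno from a plain POSIX connect():

    /* Illustrative sketch: reproduce the "connect() failed, errno = 111"
     * seen in the log by connecting to a reachable address with no
     * listener on the port. Not SPDK code. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe-oF TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the host up but nothing listening on the port, the
             * kernel answers with a TCP RST and connect() fails with
             * errno 111 (ECONNREFUSED) -- the same failure that
             * posix_sock_create reports above. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Run against a reachable host with no listener on the port, this prints "connect() failed, errno = 111 (Connection refused)".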
00:35:30.897 11103.67 IOPS, 43.37 MiB/s [2024-12-06T16:02:19.590Z]
[... 22 more identical reset/reconnect failure cycles, 17:02:19.463072 through 17:02:19.730606, each still ending with connect() to 10.0.0.2 port 4420 refused (errno = 111) and "Resetting controller failed." ...]
00:35:31.162 [2024-12-06 17:02:19.742484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.162 [2024-12-06 17:02:19.743110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.162 [2024-12-06 17:02:19.743140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.162 [2024-12-06 17:02:19.743149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.162 [2024-12-06 17:02:19.743319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.162 [2024-12-06 17:02:19.743474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.162 [2024-12-06 17:02:19.743480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.162 [2024-12-06 17:02:19.743486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.162 [2024-12-06 17:02:19.743491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.162 [2024-12-06 17:02:19.755212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.162 [2024-12-06 17:02:19.755784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.162 [2024-12-06 17:02:19.755815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.162 [2024-12-06 17:02:19.755827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.162 [2024-12-06 17:02:19.755995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.162 [2024-12-06 17:02:19.756157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.162 [2024-12-06 17:02:19.756164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.162 [2024-12-06 17:02:19.756169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.162 [2024-12-06 17:02:19.756175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.162 [2024-12-06 17:02:19.767900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.162 [2024-12-06 17:02:19.768456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.162 [2024-12-06 17:02:19.768487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.162 [2024-12-06 17:02:19.768496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.162 [2024-12-06 17:02:19.768664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.162 [2024-12-06 17:02:19.768819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.162 [2024-12-06 17:02:19.768826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.162 [2024-12-06 17:02:19.768831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.162 [2024-12-06 17:02:19.768837] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.162 [2024-12-06 17:02:19.780562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.162 [2024-12-06 17:02:19.781153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.162 [2024-12-06 17:02:19.781183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.162 [2024-12-06 17:02:19.781192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.162 [2024-12-06 17:02:19.781361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.162 [2024-12-06 17:02:19.781515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.162 [2024-12-06 17:02:19.781522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.162 [2024-12-06 17:02:19.781527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.162 [2024-12-06 17:02:19.781533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.162 [2024-12-06 17:02:19.793260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.162 [2024-12-06 17:02:19.793750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.162 [2024-12-06 17:02:19.793781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.162 [2024-12-06 17:02:19.793789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.162 [2024-12-06 17:02:19.793957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.162 [2024-12-06 17:02:19.794123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.162 [2024-12-06 17:02:19.794130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.162 [2024-12-06 17:02:19.794136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.163 [2024-12-06 17:02:19.794141] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.163 [2024-12-06 17:02:19.806005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.163 [2024-12-06 17:02:19.806571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.163 [2024-12-06 17:02:19.806602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.163 [2024-12-06 17:02:19.806611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.163 [2024-12-06 17:02:19.806779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.163 [2024-12-06 17:02:19.806933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.163 [2024-12-06 17:02:19.806940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.163 [2024-12-06 17:02:19.806945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.163 [2024-12-06 17:02:19.806951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.163 [2024-12-06 17:02:19.818685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.163 [2024-12-06 17:02:19.819199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.163 [2024-12-06 17:02:19.819230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.163 [2024-12-06 17:02:19.819239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.163 [2024-12-06 17:02:19.819407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.163 [2024-12-06 17:02:19.819563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.163 [2024-12-06 17:02:19.819569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.163 [2024-12-06 17:02:19.819575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.163 [2024-12-06 17:02:19.819581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.163 [2024-12-06 17:02:19.831448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.163 [2024-12-06 17:02:19.832014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.163 [2024-12-06 17:02:19.832044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.163 [2024-12-06 17:02:19.832053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.163 [2024-12-06 17:02:19.832229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.163 [2024-12-06 17:02:19.832385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.163 [2024-12-06 17:02:19.832391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.163 [2024-12-06 17:02:19.832397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.163 [2024-12-06 17:02:19.832406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.163 [2024-12-06 17:02:19.844126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.163 [2024-12-06 17:02:19.844720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.163 [2024-12-06 17:02:19.844751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.163 [2024-12-06 17:02:19.844759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.163 [2024-12-06 17:02:19.844927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.163 [2024-12-06 17:02:19.845082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.163 [2024-12-06 17:02:19.845089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.163 [2024-12-06 17:02:19.845094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.163 [2024-12-06 17:02:19.845107] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.438 [2024-12-06 17:02:19.856829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.857402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.857433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.857441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.857610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.857765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.857771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.857777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.857782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.438 [2024-12-06 17:02:19.869521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.870120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.870151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.870160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.870330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.870486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.870492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.870498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.870504] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.438 [2024-12-06 17:02:19.882238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.882735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.882750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.882756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.882908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.883061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.883067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.883073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.883078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.438 [2024-12-06 17:02:19.894957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.895452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.895466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.895471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.895624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.895777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.895783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.895789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.895795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.438 [2024-12-06 17:02:19.907719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.908190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.908203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.908209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.908361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.908513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.908519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.908524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.908529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.438 [2024-12-06 17:02:19.920408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.920862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.920875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.920880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.921035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.921192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.921199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.921204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.921208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.438 [2024-12-06 17:02:19.933078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.933566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.933579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.933584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.933736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.933888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.933894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.933899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.933903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.438 [2024-12-06 17:02:19.945765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.946215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.946228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.946233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.946385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.946537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.946542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.946547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.946552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.438 [2024-12-06 17:02:19.958412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.438 [2024-12-06 17:02:19.958858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.438 [2024-12-06 17:02:19.958870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.438 [2024-12-06 17:02:19.958876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.438 [2024-12-06 17:02:19.959027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.438 [2024-12-06 17:02:19.959183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.438 [2024-12-06 17:02:19.959193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.438 [2024-12-06 17:02:19.959197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.438 [2024-12-06 17:02:19.959203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.439 [2024-12-06 17:02:19.971068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:19.971636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:19.971667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:19.971676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:19.971844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:19.971999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:19.972006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:19.972011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:19.972017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.439 [2024-12-06 17:02:19.983755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:19.984704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:19.984723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:19.984730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:19.984889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:19.985042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:19.985048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:19.985053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:19.985058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.439 [2024-12-06 17:02:19.996524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:19.997013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:19.997027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:19.997032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:19.997189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:19.997342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:19.997347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:19.997353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:19.997361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.439 [2024-12-06 17:02:20.009262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:20.009734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:20.009748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:20.009754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:20.009907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:20.010066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:20.010073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:20.010078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:20.010083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.439 [2024-12-06 17:02:20.021966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:20.022428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:20.022443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:20.022451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:20.022605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:20.022758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:20.022766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:20.022774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:20.022782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.439 [2024-12-06 17:02:20.034669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:20.035110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:20.035124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:20.035129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:20.035281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:20.035434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:20.035440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:20.035444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:20.035449] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.439 [2024-12-06 17:02:20.047353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:20.047863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:20.047876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:20.047881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:20.048033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:20.048191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:20.048197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:20.048203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:20.048208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.439 [2024-12-06 17:02:20.060064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:20.060605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:20.060634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:20.060644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:20.060812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:20.060967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:20.060975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:20.060981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:20.060987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.439 [2024-12-06 17:02:20.072732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:20.073079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:20.073095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:20.073106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:20.073261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:20.073413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:20.073419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:20.073425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:20.073429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.439 [2024-12-06 17:02:20.085420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.439 [2024-12-06 17:02:20.085982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.439 [2024-12-06 17:02:20.086013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.439 [2024-12-06 17:02:20.086022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.439 [2024-12-06 17:02:20.086202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.439 [2024-12-06 17:02:20.086362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.439 [2024-12-06 17:02:20.086369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.439 [2024-12-06 17:02:20.086374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.439 [2024-12-06 17:02:20.086380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.440 [2024-12-06 17:02:20.098104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.440 [2024-12-06 17:02:20.098604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-12-06 17:02:20.098619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.440 [2024-12-06 17:02:20.098625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.440 [2024-12-06 17:02:20.098777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.440 [2024-12-06 17:02:20.098930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.440 [2024-12-06 17:02:20.098935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.440 [2024-12-06 17:02:20.098940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.440 [2024-12-06 17:02:20.098945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.440 [2024-12-06 17:02:20.110821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.440 [2024-12-06 17:02:20.111292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-12-06 17:02:20.111306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.440 [2024-12-06 17:02:20.111311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.440 [2024-12-06 17:02:20.111463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.440 [2024-12-06 17:02:20.111615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.440 [2024-12-06 17:02:20.111622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.440 [2024-12-06 17:02:20.111626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.440 [2024-12-06 17:02:20.111631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.440 [2024-12-06 17:02:20.123490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.440 [2024-12-06 17:02:20.123946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.440 [2024-12-06 17:02:20.123959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.440 [2024-12-06 17:02:20.123964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.440 [2024-12-06 17:02:20.124121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.440 [2024-12-06 17:02:20.124274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.440 [2024-12-06 17:02:20.124282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.440 [2024-12-06 17:02:20.124288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.440 [2024-12-06 17:02:20.124293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.702 [2024-12-06 17:02:20.136158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.702 [2024-12-06 17:02:20.136718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.702 [2024-12-06 17:02:20.136749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.702 [2024-12-06 17:02:20.136758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.702 [2024-12-06 17:02:20.136926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.702 [2024-12-06 17:02:20.137081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.702 [2024-12-06 17:02:20.137087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.702 [2024-12-06 17:02:20.137093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.702 [2024-12-06 17:02:20.137098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.702 [2024-12-06 17:02:20.148824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.702 [2024-12-06 17:02:20.149314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.702 [2024-12-06 17:02:20.149330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.702 [2024-12-06 17:02:20.149336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.149489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.149641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.149647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.149652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.149656] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.703 [2024-12-06 17:02:20.161534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.161907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.161920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.161925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.162077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.162234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.162241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.162245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.162254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.703 [2024-12-06 17:02:20.174301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.174777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.174789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.174795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.174946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.175098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.175115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.175120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.175125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.703 [2024-12-06 17:02:20.186987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.187244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.187257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.187262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.187414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.187566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.187572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.187577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.187582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.703 [2024-12-06 17:02:20.199742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.200184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.200198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.200203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.200356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.200509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.200514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.200519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.200524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.703 [2024-12-06 17:02:20.212443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.212772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.212789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.212794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.212946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.213106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.213112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.213117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.213122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.703 [2024-12-06 17:02:20.225148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.225690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.225721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.225730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.225898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.226053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.226059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.226065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.226070] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.703 [2024-12-06 17:02:20.237819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.238392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.238408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.238414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.238567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.238719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.238725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.238730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.238735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.703 [2024-12-06 17:02:20.250489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.251009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.251023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.251028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.251189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.251341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.251347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.251352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.251357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.703 [2024-12-06 17:02:20.263238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.263767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.703 [2024-12-06 17:02:20.263780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.703 [2024-12-06 17:02:20.263785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.703 [2024-12-06 17:02:20.263937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.703 [2024-12-06 17:02:20.264089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.703 [2024-12-06 17:02:20.264095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.703 [2024-12-06 17:02:20.264105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.703 [2024-12-06 17:02:20.264110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.703 [2024-12-06 17:02:20.275993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.703 [2024-12-06 17:02:20.276463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.276476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.276481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.276633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.276785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.276790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.276796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.276800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.704 [2024-12-06 17:02:20.288682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.289142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.289155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.289161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.289312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.289465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.289473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.289478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.289483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.704 [2024-12-06 17:02:20.301368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.301820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.301833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.301838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.301991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.302149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.302155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.302160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.302165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.704 [2024-12-06 17:02:20.314060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.314403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.314417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.314422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.314574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.314726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.314731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.314736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.314741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.704 [2024-12-06 17:02:20.326786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.327439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.327470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.327479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.327650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.327805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.327812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.327817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.327824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.704 [2024-12-06 17:02:20.339556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.340046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.340061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.340067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.340224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.340376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.340382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.340387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.340392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.704 [2024-12-06 17:02:20.352255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.352703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.352716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.352721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.352873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.353025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.353031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.353035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.353040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.704 [2024-12-06 17:02:20.364924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.365396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.365410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.365416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.365568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.365720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.365725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.365730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.365735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.704 [2024-12-06 17:02:20.377633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.378129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.378147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.378152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.378304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.378456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.378462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.378467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.378472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.704 [2024-12-06 17:02:20.390360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.704 [2024-12-06 17:02:20.390808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.704 [2024-12-06 17:02:20.390821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.704 [2024-12-06 17:02:20.390826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.704 [2024-12-06 17:02:20.390978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.704 [2024-12-06 17:02:20.391135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.704 [2024-12-06 17:02:20.391141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.704 [2024-12-06 17:02:20.391146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.704 [2024-12-06 17:02:20.391151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.966 [2024-12-06 17:02:20.403030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.966 [2024-12-06 17:02:20.403616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.966 [2024-12-06 17:02:20.403647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.966 [2024-12-06 17:02:20.403656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.966 [2024-12-06 17:02:20.403824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.966 [2024-12-06 17:02:20.403979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.966 [2024-12-06 17:02:20.403985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.966 [2024-12-06 17:02:20.403990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.966 [2024-12-06 17:02:20.403996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.966 [2024-12-06 17:02:20.415739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.966 [2024-12-06 17:02:20.416271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.966 [2024-12-06 17:02:20.416287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.966 [2024-12-06 17:02:20.416293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.966 [2024-12-06 17:02:20.416450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.966 [2024-12-06 17:02:20.416603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.966 [2024-12-06 17:02:20.416609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.966 [2024-12-06 17:02:20.416614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.966 [2024-12-06 17:02:20.416618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.966 [2024-12-06 17:02:20.428490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.966 [2024-12-06 17:02:20.429063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.966 [2024-12-06 17:02:20.429093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.966 [2024-12-06 17:02:20.429110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.966 [2024-12-06 17:02:20.429281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.966 [2024-12-06 17:02:20.429437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.966 [2024-12-06 17:02:20.429443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.966 [2024-12-06 17:02:20.429448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.966 [2024-12-06 17:02:20.429454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.966 [2024-12-06 17:02:20.441195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.966 [2024-12-06 17:02:20.441770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.966 [2024-12-06 17:02:20.441800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.441809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.441977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.442140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.442148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.442153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.442159] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.967 [2024-12-06 17:02:20.453884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.454475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.454506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.454515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.454683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.454838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.454844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.454853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.454859] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.967 8327.75 IOPS, 32.53 MiB/s [2024-12-06T16:02:20.660Z] [2024-12-06 17:02:20.466569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.467055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.467070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.467077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.467234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.467387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.467393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.467398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.467403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.967 [2024-12-06 17:02:20.479264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.479755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.479768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.479773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.479926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.480078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.480084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.480088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.480093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.967 [2024-12-06 17:02:20.491955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.492436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.492449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.492454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.492606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.492758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.492764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.492769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.492774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.967 [2024-12-06 17:02:20.504591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.505222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.505253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.505262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.505430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.505584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.505591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.505596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.505602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.967 [2024-12-06 17:02:20.517349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.517803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.517819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.517824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.517977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.518134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.518141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.518146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.518151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.967 [2024-12-06 17:02:20.530029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.530570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.530601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.530610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.530778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.530933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.530939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.530944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.530950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.967 [2024-12-06 17:02:20.542689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.543310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.543345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.543353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.543521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.543677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.543683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.543689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.543694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.967 [2024-12-06 17:02:20.555423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.556037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.967 [2024-12-06 17:02:20.556068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.967 [2024-12-06 17:02:20.556077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.967 [2024-12-06 17:02:20.556253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.967 [2024-12-06 17:02:20.556409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.967 [2024-12-06 17:02:20.556416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.967 [2024-12-06 17:02:20.556421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.967 [2024-12-06 17:02:20.556427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.967 [2024-12-06 17:02:20.568169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.967 [2024-12-06 17:02:20.568643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.568673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.568682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.968 [2024-12-06 17:02:20.568850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.968 [2024-12-06 17:02:20.569005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.968 [2024-12-06 17:02:20.569011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.968 [2024-12-06 17:02:20.569017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.968 [2024-12-06 17:02:20.569023] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.968 [2024-12-06 17:02:20.580909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.968 [2024-12-06 17:02:20.581456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.581488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.581497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.968 [2024-12-06 17:02:20.581674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.968 [2024-12-06 17:02:20.581829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.968 [2024-12-06 17:02:20.581836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.968 [2024-12-06 17:02:20.581842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.968 [2024-12-06 17:02:20.581848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.968 [2024-12-06 17:02:20.593588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.968 [2024-12-06 17:02:20.594185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.594216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.594224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.968 [2024-12-06 17:02:20.594395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.968 [2024-12-06 17:02:20.594550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.968 [2024-12-06 17:02:20.594557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.968 [2024-12-06 17:02:20.594562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.968 [2024-12-06 17:02:20.594568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.968 [2024-12-06 17:02:20.606292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.968 [2024-12-06 17:02:20.606786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.606801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.606807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.968 [2024-12-06 17:02:20.606960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.968 [2024-12-06 17:02:20.607117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.968 [2024-12-06 17:02:20.607124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.968 [2024-12-06 17:02:20.607129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.968 [2024-12-06 17:02:20.607133] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.968 [2024-12-06 17:02:20.618999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.968 [2024-12-06 17:02:20.619551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.619582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.619591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.968 [2024-12-06 17:02:20.619759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.968 [2024-12-06 17:02:20.619914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.968 [2024-12-06 17:02:20.619921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.968 [2024-12-06 17:02:20.619930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.968 [2024-12-06 17:02:20.619935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.968 [2024-12-06 17:02:20.631674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.968 [2024-12-06 17:02:20.632205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.632235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.632244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.968 [2024-12-06 17:02:20.632415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.968 [2024-12-06 17:02:20.632570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.968 [2024-12-06 17:02:20.632576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.968 [2024-12-06 17:02:20.632582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.968 [2024-12-06 17:02:20.632588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:31.968 [2024-12-06 17:02:20.644328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.968 [2024-12-06 17:02:20.644910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.644941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.644950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:31.968 [2024-12-06 17:02:20.645125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:31.968 [2024-12-06 17:02:20.645281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:31.968 [2024-12-06 17:02:20.645288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:31.968 [2024-12-06 17:02:20.645293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:31.968 [2024-12-06 17:02:20.645299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:31.968 [2024-12-06 17:02:20.657027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:31.968 [2024-12-06 17:02:20.657524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:31.968 [2024-12-06 17:02:20.657539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:31.968 [2024-12-06 17:02:20.657545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.231 [2024-12-06 17:02:20.657697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.231 [2024-12-06 17:02:20.657851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.231 [2024-12-06 17:02:20.657858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.231 [2024-12-06 17:02:20.657863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.231 [2024-12-06 17:02:20.657867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.231 [2024-12-06 17:02:20.669748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.231 [2024-12-06 17:02:20.670314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.231 [2024-12-06 17:02:20.670345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.231 [2024-12-06 17:02:20.670354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.231 [2024-12-06 17:02:20.670522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.231 [2024-12-06 17:02:20.670677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.231 [2024-12-06 17:02:20.670683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.231 [2024-12-06 17:02:20.670689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.231 [2024-12-06 17:02:20.670694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.231 [2024-12-06 17:02:20.682435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.231 [2024-12-06 17:02:20.683014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.683045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.683053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.683232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.683387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.683394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.683400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.683406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.232 [2024-12-06 17:02:20.695145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.695687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.695717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.695726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.695894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.696049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.696055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.696060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.696066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.232 [2024-12-06 17:02:20.707806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.708363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.708393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.708405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.708574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.708729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.708735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.708740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.708746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.232 [2024-12-06 17:02:20.720484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.721063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.721093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.721109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.721281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.721436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.721442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.721448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.721454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.232 [2024-12-06 17:02:20.733183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.733728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.733759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.733768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.733936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.734091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.734098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.734112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.734118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.232 [2024-12-06 17:02:20.745857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.746434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.746465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.746474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.746642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.746801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.746808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.746814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.746819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.232 [2024-12-06 17:02:20.758503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.759082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.759120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.759128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.759296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.759452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.759458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.759463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.759470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.232 [2024-12-06 17:02:20.771210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.771788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.771819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.771828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.771996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.772160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.772167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.772172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.772178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.232 [2024-12-06 17:02:20.783919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.784505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.784536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.784545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.784713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.784869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.784875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.784884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.784889] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.232 [2024-12-06 17:02:20.796625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.797254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.232 [2024-12-06 17:02:20.797285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.232 [2024-12-06 17:02:20.797294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.232 [2024-12-06 17:02:20.797462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.232 [2024-12-06 17:02:20.797618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.232 [2024-12-06 17:02:20.797624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.232 [2024-12-06 17:02:20.797630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.232 [2024-12-06 17:02:20.797635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.232 [2024-12-06 17:02:20.809386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.232 [2024-12-06 17:02:20.809958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.809988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.809997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.810173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.810328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.810335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.810341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.810347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.233 [2024-12-06 17:02:20.822087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.822651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.822682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.822691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.822859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.823014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.823020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.823025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.823031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.233 [2024-12-06 17:02:20.834770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.835431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.835462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.835471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.835639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.835794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.835801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.835806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.835812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.233 [2024-12-06 17:02:20.847534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.848025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.848040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.848046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.848204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.848356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.848362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.848367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.848372] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.233 [2024-12-06 17:02:20.860224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.860813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.860843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.860852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.861020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.861184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.861191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.861197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.861202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.233 [2024-12-06 17:02:20.872928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.873416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.873431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.873440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.873593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.873745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.873751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.873757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.873762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.233 [2024-12-06 17:02:20.885641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.886252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.886283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.886292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.886460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.886615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.886621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.886626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.886632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.233 [2024-12-06 17:02:20.898405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.898879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.898894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.898900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.899052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.899211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.899217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.899222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.899227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.233 [2024-12-06 17:02:20.911104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.233 [2024-12-06 17:02:20.911581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.233 [2024-12-06 17:02:20.911612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.233 [2024-12-06 17:02:20.911621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.233 [2024-12-06 17:02:20.911792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.233 [2024-12-06 17:02:20.911951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.233 [2024-12-06 17:02:20.911957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.233 [2024-12-06 17:02:20.911963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.233 [2024-12-06 17:02:20.911968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.496 [2024-12-06 17:02:20.923858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.496 [2024-12-06 17:02:20.924468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.496 [2024-12-06 17:02:20.924499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.496 [2024-12-06 17:02:20.924508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.496 [2024-12-06 17:02:20.924676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.496 [2024-12-06 17:02:20.924831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.496 [2024-12-06 17:02:20.924838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.496 [2024-12-06 17:02:20.924843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:20.924849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.497 [2024-12-06 17:02:20.936599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:20.937105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:20.937120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:20.937126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:20.937279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:20.937431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:20.937436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:20.937441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:20.937446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.497 [2024-12-06 17:02:20.949315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:20.949902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:20.949933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:20.949942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:20.950118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:20.950273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:20.950280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:20.950290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:20.950295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.497 [2024-12-06 17:02:20.962024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:20.962385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:20.962401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:20.962407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:20.962560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:20.962712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:20.962718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:20.962723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:20.962727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.497 [2024-12-06 17:02:20.974752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:20.975350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:20.975381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:20.975390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:20.975558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:20.975713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:20.975719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:20.975725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:20.975730] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.497 [2024-12-06 17:02:20.987466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:20.988053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:20.988083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:20.988092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:20.988269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:20.988425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:20.988432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:20.988438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:20.988444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.497 [2024-12-06 17:02:21.000181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:21.000764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:21.000794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:21.000803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:21.000971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:21.001134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:21.001141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:21.001146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:21.001152] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.497 [2024-12-06 17:02:21.012922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:21.013508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:21.013540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:21.013548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:21.013724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:21.013880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:21.013887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:21.013892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:21.013898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.497 [2024-12-06 17:02:21.025636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:21.026255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:21.026286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:21.026295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:21.026463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:21.026618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.497 [2024-12-06 17:02:21.026625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.497 [2024-12-06 17:02:21.026630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.497 [2024-12-06 17:02:21.026635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.497 [2024-12-06 17:02:21.038364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.497 [2024-12-06 17:02:21.038935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.497 [2024-12-06 17:02:21.038966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.497 [2024-12-06 17:02:21.038977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.497 [2024-12-06 17:02:21.039153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.497 [2024-12-06 17:02:21.039309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.039316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.039321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.039327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.498 [2024-12-06 17:02:21.051045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.051621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.051651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.051660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.051828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.051983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.051989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.051995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.052001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.498 [2024-12-06 17:02:21.063723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.064216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.064246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.064255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.064426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.064581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.064587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.064592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.064598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.498 [2024-12-06 17:02:21.076485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.077082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.077120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.077129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.077297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.077457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.077464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.077469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.077475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.498 [2024-12-06 17:02:21.089196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.089708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.089724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.089730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.089883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.090037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.090043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.090048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.090053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.498 [2024-12-06 17:02:21.101919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.102378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.102393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.102398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.102551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.102704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.102711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.102716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.102721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.498 [2024-12-06 17:02:21.114672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.115140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.115162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.115168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.115325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.115480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.115486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.115496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.115501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.498 [2024-12-06 17:02:21.127363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.127952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.127984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.127993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.128169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.128326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.128333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.128339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.128345] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.498 [2024-12-06 17:02:21.140066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.140663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.140695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.140704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.140872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.141028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.141035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.141041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.141047] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.498 [2024-12-06 17:02:21.152767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.153368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.153400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.153409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.153577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.153733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.498 [2024-12-06 17:02:21.153740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.498 [2024-12-06 17:02:21.153746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.498 [2024-12-06 17:02:21.153751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.498 [2024-12-06 17:02:21.165472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.498 [2024-12-06 17:02:21.166080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.498 [2024-12-06 17:02:21.166118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.498 [2024-12-06 17:02:21.166127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.498 [2024-12-06 17:02:21.166297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.498 [2024-12-06 17:02:21.166453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.499 [2024-12-06 17:02:21.166460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.499 [2024-12-06 17:02:21.166466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.499 [2024-12-06 17:02:21.166472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.499 [2024-12-06 17:02:21.178198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.499 [2024-12-06 17:02:21.178777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.499 [2024-12-06 17:02:21.178809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.499 [2024-12-06 17:02:21.178818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.499 [2024-12-06 17:02:21.178986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.499 [2024-12-06 17:02:21.179150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.499 [2024-12-06 17:02:21.179158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.499 [2024-12-06 17:02:21.179163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.499 [2024-12-06 17:02:21.179169] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.761 [2024-12-06 17:02:21.190896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.761 [2024-12-06 17:02:21.191449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.761 [2024-12-06 17:02:21.191481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.761 [2024-12-06 17:02:21.191490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.761 [2024-12-06 17:02:21.191659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.761 [2024-12-06 17:02:21.191814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.761 [2024-12-06 17:02:21.191822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.761 [2024-12-06 17:02:21.191828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.761 [2024-12-06 17:02:21.191834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.761 [2024-12-06 17:02:21.203564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.761 [2024-12-06 17:02:21.204164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.761 [2024-12-06 17:02:21.204198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.761 [2024-12-06 17:02:21.204215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.761 [2024-12-06 17:02:21.204387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.761 [2024-12-06 17:02:21.204543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.761 [2024-12-06 17:02:21.204551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.761 [2024-12-06 17:02:21.204556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.761 [2024-12-06 17:02:21.204563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.761 [2024-12-06 17:02:21.216310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.761 [2024-12-06 17:02:21.216886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.761 [2024-12-06 17:02:21.216918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.761 [2024-12-06 17:02:21.216927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.761 [2024-12-06 17:02:21.217095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.761 [2024-12-06 17:02:21.217258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.761 [2024-12-06 17:02:21.217265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.761 [2024-12-06 17:02:21.217271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.761 [2024-12-06 17:02:21.217277] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.761 [2024-12-06 17:02:21.229006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.761 [2024-12-06 17:02:21.229573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.761 [2024-12-06 17:02:21.229605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.761 [2024-12-06 17:02:21.229614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.761 [2024-12-06 17:02:21.229782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.761 [2024-12-06 17:02:21.229938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.761 [2024-12-06 17:02:21.229946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.761 [2024-12-06 17:02:21.229952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.761 [2024-12-06 17:02:21.229958] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:32.761 [2024-12-06 17:02:21.241699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.761 [2024-12-06 17:02:21.242232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.761 [2024-12-06 17:02:21.242264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.761 [2024-12-06 17:02:21.242274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.761 [2024-12-06 17:02:21.242442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.761 [2024-12-06 17:02:21.242599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.761 [2024-12-06 17:02:21.242611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.761 [2024-12-06 17:02:21.242616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.761 [2024-12-06 17:02:21.242623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:32.761 [2024-12-06 17:02:21.254367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:32.761 [2024-12-06 17:02:21.254961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:32.761 [2024-12-06 17:02:21.254993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:32.761 [2024-12-06 17:02:21.255002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:32.761 [2024-12-06 17:02:21.255178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:32.761 [2024-12-06 17:02:21.255335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:32.761 [2024-12-06 17:02:21.255343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:32.761 [2024-12-06 17:02:21.255349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:32.761 [2024-12-06 17:02:21.255356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.025 6662.20 IOPS, 26.02 MiB/s [2024-12-06T16:02:21.718Z]
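Around the performance sample above, the log shows the same reset attempt recurring roughly every 12-13 ms until the surrounding test moves on. A minimal sketch of that retry-with-delay pattern, under the assumptions of a fixed delay and a bounded attempt count; try_connect() is a hypothetical stand-in, not an SPDK API:

/*
 * Sketch: bounded reconnect loop with a fixed delay, approximating the
 * cadence visible in the log (one reset attempt every ~12.7 ms).
 */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static bool try_connect(void)
{
    /* Placeholder: a real implementation would attempt the transport
     * connect here and return true on success. */
    return false;
}

int main(void)
{
    const int max_attempts = 5;
    const struct timespec delay = { .tv_sec = 0, .tv_nsec = 12700000L }; /* ~12.7 ms */

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect()) {
            printf("attempt %d: connected\n", attempt);
            return 0;
        }
        printf("attempt %d: connect failed, retrying\n", attempt);
        nanosleep(&delay, NULL);
    }

    /* Retries exhausted: surface the failure, as bdev_nvme does with
     * "Resetting controller failed." */
    fprintf(stderr, "giving up after %d attempts\n", max_attempts);
    return 1;
}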
00:35:33.291 [2024-12-06 17:02:21.851023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.291 [2024-12-06 17:02:21.851550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.291 [2024-12-06 17:02:21.851567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.291 [2024-12-06 17:02:21.851573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.291 [2024-12-06 17:02:21.851727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.291 [2024-12-06 17:02:21.851882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.291 [2024-12-06 17:02:21.851889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.291 [2024-12-06 17:02:21.851895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.291 [2024-12-06 17:02:21.851901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.291 [2024-12-06 17:02:21.863776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.291 [2024-12-06 17:02:21.864226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.291 [2024-12-06 17:02:21.864240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.291 [2024-12-06 17:02:21.864246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.291 [2024-12-06 17:02:21.864403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.291 [2024-12-06 17:02:21.864556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.291 [2024-12-06 17:02:21.864564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.291 [2024-12-06 17:02:21.864569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.291 [2024-12-06 17:02:21.864575] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.291 [2024-12-06 17:02:21.876450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.291 [2024-12-06 17:02:21.876902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.291 [2024-12-06 17:02:21.876917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.291 [2024-12-06 17:02:21.876923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.291 [2024-12-06 17:02:21.877076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.291 [2024-12-06 17:02:21.877243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.291 [2024-12-06 17:02:21.877251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.291 [2024-12-06 17:02:21.877256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.291 [2024-12-06 17:02:21.877262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.291 [2024-12-06 17:02:21.889133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.291 [2024-12-06 17:02:21.889716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.291 [2024-12-06 17:02:21.889749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.291 [2024-12-06 17:02:21.889758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.291 [2024-12-06 17:02:21.889927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.291 [2024-12-06 17:02:21.890083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.291 [2024-12-06 17:02:21.890091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.890097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.890111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.292 [2024-12-06 17:02:21.901842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.292 [2024-12-06 17:02:21.902298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.292 [2024-12-06 17:02:21.902315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.292 [2024-12-06 17:02:21.902322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.292 [2024-12-06 17:02:21.902475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.292 [2024-12-06 17:02:21.902629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.292 [2024-12-06 17:02:21.902636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.902645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.902650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.292 [2024-12-06 17:02:21.914527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.292 [2024-12-06 17:02:21.914981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.292 [2024-12-06 17:02:21.914996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.292 [2024-12-06 17:02:21.915002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.292 [2024-12-06 17:02:21.915158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.292 [2024-12-06 17:02:21.915313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.292 [2024-12-06 17:02:21.915320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.915326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.915331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.292 [2024-12-06 17:02:21.927210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.292 [2024-12-06 17:02:21.927661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.292 [2024-12-06 17:02:21.927676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.292 [2024-12-06 17:02:21.927682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.292 [2024-12-06 17:02:21.927835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.292 [2024-12-06 17:02:21.927988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.292 [2024-12-06 17:02:21.927996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.928001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.928006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.292 [2024-12-06 17:02:21.939868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.292 [2024-12-06 17:02:21.940425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.292 [2024-12-06 17:02:21.940458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.292 [2024-12-06 17:02:21.940467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.292 [2024-12-06 17:02:21.940635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.292 [2024-12-06 17:02:21.940792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.292 [2024-12-06 17:02:21.940800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.940806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.940812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.292 [2024-12-06 17:02:21.952565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.292 [2024-12-06 17:02:21.953158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.292 [2024-12-06 17:02:21.953190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.292 [2024-12-06 17:02:21.953199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.292 [2024-12-06 17:02:21.953371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.292 [2024-12-06 17:02:21.953527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.292 [2024-12-06 17:02:21.953536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.953542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.953548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.292 [2024-12-06 17:02:21.965286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.292 [2024-12-06 17:02:21.965885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.292 [2024-12-06 17:02:21.965917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.292 [2024-12-06 17:02:21.965927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.292 [2024-12-06 17:02:21.966096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.292 [2024-12-06 17:02:21.966259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.292 [2024-12-06 17:02:21.966267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.966273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.966279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.292 [2024-12-06 17:02:21.978010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.292 [2024-12-06 17:02:21.978610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.292 [2024-12-06 17:02:21.978643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.292 [2024-12-06 17:02:21.978653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.292 [2024-12-06 17:02:21.978821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.292 [2024-12-06 17:02:21.978978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.292 [2024-12-06 17:02:21.978986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.292 [2024-12-06 17:02:21.978992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.292 [2024-12-06 17:02:21.978999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.553 [2024-12-06 17:02:21.990731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.553 [2024-12-06 17:02:21.991201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:21.991237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:21.991247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:21.991415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:21.991572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:21.991580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:21.991586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:21.991593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.554 [2024-12-06 17:02:22.003471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.003965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:22.003997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:22.004007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:22.004184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:22.004341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:22.004349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:22.004354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:22.004361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.554 [2024-12-06 17:02:22.016237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.016695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:22.016711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:22.016717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:22.016877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:22.017032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:22.017039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:22.017045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:22.017050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.554 [2024-12-06 17:02:22.028921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.029379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:22.029393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:22.029400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:22.029552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:22.029710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:22.029717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:22.029723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:22.029729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.554 [2024-12-06 17:02:22.041632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.042218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:22.042250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:22.042261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:22.042429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:22.042586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:22.042594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:22.042600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:22.042607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2507431 Killed "${NVMF_APP[@]}" "$@" 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2509120 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2509120 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2509120 ']' 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.554 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.554 [2024-12-06 17:02:22.054347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.054953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:22.054985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:22.054995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:22.055173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:22.055331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:22.055339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:22.055346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:22.055352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
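This chunk is the turning point in the log: bash reports that the previous target process (pid 2507431) was killed at bdevperf.sh line 35, and tgt_init immediately brings up a replacement nvmf_tgt (pid 2509120) inside the cvl_0_0_ns_spdk network namespace while the host side keeps failing its reconnects. A minimal sketch of what the traced nvmfappstart/waitforlisten steps amount to, paraphrased from the xtrace rather than quoted from the helpers themselves:

  # sketch only; condensed from the trace, not the verbatim nvmf/common.sh helpers
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!                  # 2509120 in this run
  waitforlisten "$nvmfpid"    # poll /var/tmp/spdk.sock (max_retries=100) before sending RPCs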
00:35:33.554 [2024-12-06 17:02:22.067074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.067586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:22.067603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:22.067609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:22.067763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:22.067917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:22.067924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:22.067930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:22.067936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.554 [2024-12-06 17:02:22.079808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.080237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.554 [2024-12-06 17:02:22.080270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.554 [2024-12-06 17:02:22.080280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.554 [2024-12-06 17:02:22.080451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.554 [2024-12-06 17:02:22.080608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.554 [2024-12-06 17:02:22.080616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.554 [2024-12-06 17:02:22.080622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.554 [2024-12-06 17:02:22.080629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.554 [2024-12-06 17:02:22.086495] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:35:33.554 [2024-12-06 17:02:22.086542] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:33.554 [2024-12-06 17:02:22.092501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.554 [2024-12-06 17:02:22.093089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.093127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.093140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.093309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.093466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.093473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.093479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.093486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.555 [2024-12-06 17:02:22.105210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.105556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.105573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.105580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.105734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.105887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.105895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.105900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.105906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
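The EAL parameter record above shows the replacement target starting with core mask 0xE (-c 0xE, binary 1110), i.e. cores 1-3 with core 0 left free for the rest of the system. Decoding such a mask in the shell is a one-liner:

  mask=0xE
  for core in {0..3}; do
      (( (mask >> core) & 1 )) && echo "core $core enabled"
  done
  # -> cores 1, 2 and 3; this matches the "Total cores available: 3" notice
  #    and the three reactors started a few records below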
00:35:33.555 [2024-12-06 17:02:22.117942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.118456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.118470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.118477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.118630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.118784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.118791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.118796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.118801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.555 [2024-12-06 17:02:22.130666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.131150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.131172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.131179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.131338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.131493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.131503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.131508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.131514] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.555 [2024-12-06 17:02:22.143391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.143973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.144005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.144015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.144274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.144432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.144439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.144446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.144453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.555 [2024-12-06 17:02:22.156037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.156616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.156649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.156659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.156828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.156984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.156992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.156998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.157005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.555 [2024-12-06 17:02:22.157899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:33.555 [2024-12-06 17:02:22.168754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.169398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.169432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.169442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.169613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.169770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.169778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.169789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.169795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.555 [2024-12-06 17:02:22.173637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:33.555 [2024-12-06 17:02:22.173660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:33.555 [2024-12-06 17:02:22.173667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:33.555 [2024-12-06 17:02:22.173673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:33.555 [2024-12-06 17:02:22.173678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
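Interleaved with the reconnect noise, the new target finishes initializing: it has the full 0xFFFF tracepoint group mask enabled and prints how to inspect the trace. Both commands below come straight from the notices above; either attach live or keep the shared-memory file:

  spdk_trace -s nvmf -i 0            # attach to app instance 0 while it is running
  cp /dev/shm/nvmf_trace.0 /tmp/     # or save the trace for offline analysis/debug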
00:35:33.555 [2024-12-06 17:02:22.174801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:33.555 [2024-12-06 17:02:22.174956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.555 [2024-12-06 17:02:22.174958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:33.555 [2024-12-06 17:02:22.181407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.181935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.181953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.181960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.182118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.182273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.182281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.182286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.182292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.555 [2024-12-06 17:02:22.194176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.194658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.555 [2024-12-06 17:02:22.194673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.555 [2024-12-06 17:02:22.194681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.555 [2024-12-06 17:02:22.194834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.555 [2024-12-06 17:02:22.194989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.555 [2024-12-06 17:02:22.194996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.555 [2024-12-06 17:02:22.195002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.555 [2024-12-06 17:02:22.195007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.555 [2024-12-06 17:02:22.207049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.555 [2024-12-06 17:02:22.207682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.556 [2024-12-06 17:02:22.207719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.556 [2024-12-06 17:02:22.207734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.556 [2024-12-06 17:02:22.207908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.556 [2024-12-06 17:02:22.208066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.556 [2024-12-06 17:02:22.208074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.556 [2024-12-06 17:02:22.208080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.556 [2024-12-06 17:02:22.208087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.556 [2024-12-06 17:02:22.219701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.556 [2024-12-06 17:02:22.220111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.556 [2024-12-06 17:02:22.220129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.556 [2024-12-06 17:02:22.220135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.556 [2024-12-06 17:02:22.220289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.556 [2024-12-06 17:02:22.220444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.556 [2024-12-06 17:02:22.220451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.556 [2024-12-06 17:02:22.220457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.556 [2024-12-06 17:02:22.220463] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.556 [2024-12-06 17:02:22.232473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.556 [2024-12-06 17:02:22.232939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.556 [2024-12-06 17:02:22.232953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.556 [2024-12-06 17:02:22.232960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.556 [2024-12-06 17:02:22.233118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.556 [2024-12-06 17:02:22.233273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.556 [2024-12-06 17:02:22.233281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.556 [2024-12-06 17:02:22.233286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.556 [2024-12-06 17:02:22.233292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.556 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.556 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:33.556 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:33.556 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.556 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.817 [2024-12-06 17:02:22.245154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.817 [2024-12-06 17:02:22.245750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-12-06 17:02:22.245787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.817 [2024-12-06 17:02:22.245797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.817 [2024-12-06 17:02:22.245968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.817 [2024-12-06 17:02:22.246131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.817 [2024-12-06 17:02:22.246140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.817 [2024-12-06 17:02:22.246146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.817 [2024-12-06 17:02:22.246153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.817 [2024-12-06 17:02:22.257884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.817 [2024-12-06 17:02:22.258443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-12-06 17:02:22.258476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.817 [2024-12-06 17:02:22.258486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.817 [2024-12-06 17:02:22.258655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.817 [2024-12-06 17:02:22.258811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.817 [2024-12-06 17:02:22.258819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.817 [2024-12-06 17:02:22.258826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.817 [2024-12-06 17:02:22.258832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.817 [2024-12-06 17:02:22.270569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.817 [2024-12-06 17:02:22.271211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-12-06 17:02:22.271243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.817 [2024-12-06 17:02:22.271253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.817 [2024-12-06 17:02:22.271423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.817 [2024-12-06 17:02:22.271579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.817 [2024-12-06 17:02:22.271587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.817 [2024-12-06 17:02:22.271593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.817 [2024-12-06 17:02:22.271601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:33.817 [2024-12-06 17:02:22.273113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.817 [2024-12-06 17:02:22.283351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.817 [2024-12-06 17:02:22.283924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-12-06 17:02:22.283956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.817 [2024-12-06 17:02:22.283967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.817 [2024-12-06 17:02:22.284144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.817 [2024-12-06 17:02:22.284301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.817 [2024-12-06 17:02:22.284309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.817 [2024-12-06 17:02:22.284315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.817 [2024-12-06 17:02:22.284322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.817 [2024-12-06 17:02:22.296037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.817 [2024-12-06 17:02:22.296625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-12-06 17:02:22.296658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.817 [2024-12-06 17:02:22.296668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.817 [2024-12-06 17:02:22.296837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.817 [2024-12-06 17:02:22.296994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.817 [2024-12-06 17:02:22.297001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.817 [2024-12-06 17:02:22.297008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.817 [2024-12-06 17:02:22.297014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
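With the target answering RPCs, the script replays the standard provisioning steps. The harness function rpc_cmd is a wrapper that sends these to /var/tmp/spdk.sock, so the equivalent direct invocations would look roughly like the following (the scripts/rpc.py path is assumed from a standard SPDK checkout):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8192-byte I/O unit size
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks

The "*** TCP Transport Init ***" notice above is the target acknowledging the first of these.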
00:35:33.817 Malloc0 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.817 [2024-12-06 17:02:22.308752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.817 [2024-12-06 17:02:22.309352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:33.817 [2024-12-06 17:02:22.309385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcfd660 with addr=10.0.0.2, port=4420 00:35:33.817 [2024-12-06 17:02:22.309395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd660 is same with the state(6) to be set 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:33.817 [2024-12-06 17:02:22.309563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfd660 (9): Bad file descriptor 00:35:33.817 [2024-12-06 17:02:22.309720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.817 [2024-12-06 17:02:22.309729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:33.817 [2024-12-06 17:02:22.309735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:33.817 [2024-12-06 17:02:22.309742] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.817 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.818 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.818 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.818 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:33.818 [2024-12-06 17:02:22.320774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.818 [2024-12-06 17:02:22.321484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:33.818 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.818 17:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2507784 00:35:33.818 [2024-12-06 17:02:22.351787] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
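[editor's note] Untangled from the interleaved reconnect noise, the target bring-up traced across the last few blocks is the usual five-step sequence. Gathered in one place (the rpc.py form is assumed equivalent to the suite's rpc_cmd wrapper; the arguments are copied from the trace above):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener appears, the host recovers, which matches the closing "Resetting controller successful" notice above.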
00:35:35.197 5790.17 IOPS, 22.62 MiB/s [2024-12-06T16:02:24.826Z] 6824.57 IOPS, 26.66 MiB/s [2024-12-06T16:02:25.762Z] 7619.12 IOPS, 29.76 MiB/s [2024-12-06T16:02:26.697Z] 8225.89 IOPS, 32.13 MiB/s [2024-12-06T16:02:27.647Z] 8715.40 IOPS, 34.04 MiB/s [2024-12-06T16:02:28.582Z] 9114.73 IOPS, 35.60 MiB/s [2024-12-06T16:02:29.516Z] 9453.58 IOPS, 36.93 MiB/s [2024-12-06T16:02:30.891Z] 9720.69 IOPS, 37.97 MiB/s [2024-12-06T16:02:31.826Z] 9958.14 IOPS, 38.90 MiB/s [2024-12-06T16:02:31.826Z] 10166.00 IOPS, 39.71 MiB/s 00:35:43.133 Latency(us) 00:35:43.133 [2024-12-06T16:02:31.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:43.133 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:43.133 Verification LBA range: start 0x0 length 0x4000 00:35:43.133 Nvme1n1 : 15.00 10163.72 39.70 11673.16 0.00 5843.95 576.85 12779.52 00:35:43.133 [2024-12-06T16:02:31.826Z] =================================================================================================================== 00:35:43.133 [2024-12-06T16:02:31.826Z] Total : 10163.72 39.70 11673.16 0.00 5843.95 576.85 12779.52 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:43.133 rmmod nvme_tcp 00:35:43.133 rmmod nvme_fabrics 00:35:43.133 rmmod nvme_keyring 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2509120 ']' 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2509120 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2509120 ']' 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2509120 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509120 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509120' 00:35:43.133 killing process with pid 2509120 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2509120 00:35:43.133 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2509120 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:43.392 17:02:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:45.291 00:35:45.291 real 0m24.917s 00:35:45.291 user 0m59.640s 00:35:45.291 sys 0m5.675s 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.291 ************************************ 00:35:45.291 END TEST nvmf_bdevperf 00:35:45.291 ************************************ 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:45.291 ************************************ 00:35:45.291 START TEST nvmf_target_disconnect 00:35:45.291 ************************************ 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:35:45.291 * Looking for test storage... 
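[editor's note] Before the disconnect tests spin up, a quick consistency check on the bdevperf summary a little way above: with a 4096-byte I/O size, the MiB/s column should equal IOPS times 4 KiB, and it does:

    # 10163.72 IOPS * 4096 B per I/O, expressed in MiB/s
    echo 'scale=2; 10163.72 * 4096 / 1048576' | bc   # -> 39.70, matching the table

The third numeric column (11673.16) is failed I/Os per second (Fail/s), which is plausible for a verify run that repeatedly loses and re-establishes its controller by design.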
00:35:45.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:35:45.291 17:02:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:45.550 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:45.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.551 --rc genhtml_branch_coverage=1 00:35:45.551 --rc genhtml_function_coverage=1 00:35:45.551 --rc genhtml_legend=1 00:35:45.551 --rc geninfo_all_blocks=1 00:35:45.551 --rc geninfo_unexecuted_blocks=1 00:35:45.551 00:35:45.551 ' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:45.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.551 --rc genhtml_branch_coverage=1 00:35:45.551 --rc genhtml_function_coverage=1 00:35:45.551 --rc genhtml_legend=1 00:35:45.551 --rc geninfo_all_blocks=1 00:35:45.551 --rc geninfo_unexecuted_blocks=1 00:35:45.551 00:35:45.551 ' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:45.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.551 --rc genhtml_branch_coverage=1 00:35:45.551 --rc genhtml_function_coverage=1 00:35:45.551 --rc genhtml_legend=1 00:35:45.551 --rc geninfo_all_blocks=1 00:35:45.551 --rc geninfo_unexecuted_blocks=1 00:35:45.551 00:35:45.551 ' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:45.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.551 --rc genhtml_branch_coverage=1 00:35:45.551 --rc genhtml_function_coverage=1 00:35:45.551 --rc genhtml_legend=1 00:35:45.551 --rc geninfo_all_blocks=1 00:35:45.551 --rc geninfo_unexecuted_blocks=1 00:35:45.551 00:35:45.551 ' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:45.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:45.551 17:02:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.913 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:50.914 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:50.914 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:50.914 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:50.915 Found net devices under 0000:31:00.0: cvl_0_0 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.915 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:50.916 Found net devices under 0000:31:00.1: cvl_0_1 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
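[editor's note] The scan above found two Intel E810 ports (0x8086:0x159b) with net devices cvl_0_0 and cvl_0_1, and the (( 2 > 1 )) branch then splits them into a target side and an initiator side. Condensed, the selection amounts to the following (a restatement of the nvmf/common.sh behaviour visible in the trace, not a copy of its source):

    net_devs=(cvl_0_0 cvl_0_1)                   # as discovered above
    if (( ${#net_devs[@]} > 1 )); then
        NVMF_TARGET_INTERFACE=${net_devs[0]}     # moved into the test netns
        NVMF_INITIATOR_INTERFACE=${net_devs[1]}  # stays in the default netns
    fi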
00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:50.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:35:50.916 00:35:50.916 --- 10.0.0.2 ping statistics --- 00:35:50.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.916 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:50.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:35:50.916 00:35:50.916 --- 10.0.0.1 ping statistics --- 00:35:50.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.916 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.916 ************************************ 00:35:50.916 START TEST nvmf_target_disconnect_tc1 00:35:50.916 ************************************ 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.916 17:02:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:35:50.916 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:50.916 [2024-12-06 17:02:39.443414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:50.917 [2024-12-06 17:02:39.443463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1460e30 with addr=10.0.0.2, port=4420 00:35:50.917 [2024-12-06 17:02:39.443483] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:35:50.917 [2024-12-06 17:02:39.443492] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:50.917 [2024-12-06 17:02:39.443498] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:35:50.917 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:35:50.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:35:50.917 Initializing NVMe Controllers 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:50.917 00:35:50.917 real 0m0.087s 00:35:50.917 user 0m0.032s 00:35:50.917 sys 0m0.054s 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:35:50.917 ************************************ 00:35:50.917 END TEST nvmf_target_disconnect_tc1 00:35:50.917 ************************************ 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:50.917 ************************************ 00:35:50.917 START TEST nvmf_target_disconnect_tc2 00:35:50.917 ************************************ 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2515495 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2515495 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2515495 ']' 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:50.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:50.917 17:02:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:35:50.917 [2024-12-06 17:02:39.539746] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:35:50.917 [2024-12-06 17:02:39.539795] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:51.200 [2024-12-06 17:02:39.623536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:51.200 [2024-12-06 17:02:39.642417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:51.200 [2024-12-06 17:02:39.642449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
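[editor's note] For tc2 the target is started with -m 0xF0; that mask selects CPUs 4 through 7, which is exactly the set the reactor start-up notices below report. A one-liner to convince yourself:

    printf 'mask 0x%X -> binary %s\n' 0xF0 "$(echo 'obase=2; 240' | bc)"
    # mask 0xF0 -> binary 11110000  (bits 4..7 set => cores 4, 5, 6, 7)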
00:35:51.200 [2024-12-06 17:02:39.642458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:51.200 [2024-12-06 17:02:39.642465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:51.200 [2024-12-06 17:02:39.642471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:51.200 [2024-12-06 17:02:39.643985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:51.200 [2024-12-06 17:02:39.644151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:51.200 [2024-12-06 17:02:39.644445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:35:51.200 [2024-12-06 17:02:39.644445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.768 Malloc0 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.768 [2024-12-06 17:02:40.371575] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.768 17:02:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.768 [2024-12-06 17:02:40.399816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.768 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2515719 00:35:51.769 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:35:51.769 17:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:54.341 17:02:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2515495 00:35:54.341 17:02:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error 
(sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Write completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Write completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Write completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Write completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Write completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Write completed with error (sct=0, sc=8) 00:35:54.341 starting I/O failed 00:35:54.341 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Write completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Write completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Read completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 Write completed with error (sct=0, sc=8) 00:35:54.342 starting I/O failed 00:35:54.342 [2024-12-06 17:02:42.427461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:54.342 [2024-12-06 17:02:42.427760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-12-06 17:02:42.427784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-12-06 17:02:42.427973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-12-06 17:02:42.427981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 00:35:54.342 [2024-12-06 17:02:42.428387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.342 [2024-12-06 17:02:42.428417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.342 qpair failed and we were unable to recover it. 
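[editor's note] The storm of "completed with error (sct=0, sc=8)" entries above is the expected fallout of the kill -9 issued against the target a few lines earlier: sct=0 is the NVMe Generic Command Status type, and status code 0x08 in that table is, as I read the spec (SPDK's SPDK_NVME_SC_ABORTED_SQ_DELETION), "Command Aborted due to SQ Deletion", so every queued I/O on the dying qpair is completed back with it. A tiny decoder for just the values seen here (a hypothetical helper; consult the NVMe base specification for the full table):

    decode_generic_sc() {   # sct=0 (generic) status codes only
        case $1 in
            0) echo "Successful Completion" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) echo "see NVMe base spec, Generic Command Status table" ;;
        esac
    }
    decode_generic_sc 8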
[the posix.c:1054 connect() failed (errno = 111) / nvme_tcp.c:2288 sock connection error / "qpair failed and we were unable to recover it." triplet above repeats for each subsequent reconnect attempt against tqpair=0x7f89e0000b90 (addr=10.0.0.2, port=4420), wall-clock 17:02:42.427 through 17:02:42.485]
00:35:54.347 [2024-12-06 17:02:42.485281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.485289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.485581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.485589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.485861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.485869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.486165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.486174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.486495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.486503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.486857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.486866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.487207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.487215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.487617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.487625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.487950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.487959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.488274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.488282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-12-06 17:02:42.488585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.488594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.488884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.488892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.489183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.489191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.489497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.489505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.489786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.489794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.490117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.490125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.490414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.490423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.490714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.490723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.490903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.490911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.347 [2024-12-06 17:02:42.491207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.491215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 
00:35:54.347 [2024-12-06 17:02:42.491518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.347 [2024-12-06 17:02:42.491527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.347 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.491809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.491817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.492105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.492113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.492382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.492392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.492676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.492684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.492959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.492967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.493257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.493265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.493550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.493558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.493864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.493873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.494146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.494154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-12-06 17:02:42.494451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.494459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.494759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.494768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.495049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.495059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.495318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.495326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.495621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.495629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.495925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.495933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.496225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.496234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.496527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.496535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.496864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.496873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.497164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.497172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-12-06 17:02:42.497486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.497494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.497797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.497806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.498082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.498090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.498383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.498393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.498699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.498708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.498900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.498908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.499211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.499220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.499510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.499518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.499809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.499817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.500114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.500123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 
00:35:54.348 [2024-12-06 17:02:42.500414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.500422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.500736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.500744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.501035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.501043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.501207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.501217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.501511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.501519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.501814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.501823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.502122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.502131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.502434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.502442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.348 [2024-12-06 17:02:42.502733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.348 [2024-12-06 17:02:42.502741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.348 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.503090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.503102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-12-06 17:02:42.503390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.503399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.503708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.503716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.503997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.504005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.504167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.504176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.504464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.504473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.504782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.504791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.505079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.505089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.505388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.505396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.505693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.505701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.505989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.505998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-12-06 17:02:42.506276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.506285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.506599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.506607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.506902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.506912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.507269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.507279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.507558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.507566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.507858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.507866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.508145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.508155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.508485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.508493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.508770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.508778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.509073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.509081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-12-06 17:02:42.509425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.509433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.509724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.509732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.510025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.510033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.510335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.510343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.510692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.510702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.510995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.511004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.511301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.511311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.511608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.511616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.511918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.511927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.512245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.512254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 
00:35:54.349 [2024-12-06 17:02:42.512564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.512572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.512855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.512863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.513163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.513171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.513462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.513470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.349 [2024-12-06 17:02:42.513765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.349 [2024-12-06 17:02:42.513773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.349 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.514050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.514058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.514243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.514251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.514564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.514572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.514854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.514862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.515206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.515214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-12-06 17:02:42.515519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.515527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.515808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.515817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.516096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.516107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.516425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.516433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.516722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.516730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.517028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.517038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.517360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.517368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.517667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.517675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.517968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.517977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.518264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.518272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-12-06 17:02:42.518569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.518578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.518871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.518879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.519171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.519181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.519483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.519491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.519781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.519789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.520078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.520086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.520380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.520388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.520682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.520690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.520981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.520990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.521273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.521282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-12-06 17:02:42.521585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.521593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.521877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.521884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.522180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.522188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.522492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.522500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.522812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.522820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.523109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.523117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.523426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.523434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.523733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.523742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.524030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.524039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 00:35:54.350 [2024-12-06 17:02:42.524337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.350 [2024-12-06 17:02:42.524346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.350 qpair failed and we were unable to recover it. 
00:35:54.350 [2024-12-06 17:02:42.524509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.351 [2024-12-06 17:02:42.524519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.351 qpair failed and we were unable to recover it.
[... the same three-line failure — connect() errno = 111, sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats verbatim (apart from timestamps) for every reconnect attempt between 17:02:42.524 and 17:02:42.586 ...]
00:35:54.357 [2024-12-06 17:02:42.586276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.586284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it.
00:35:54.357 [2024-12-06 17:02:42.586599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.586607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.586896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.586904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.587191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.587199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.587513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.587521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.587790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.587798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.588109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.588118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.588402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.588410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.588705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.588713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.589014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.589022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.589228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.589236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 
00:35:54.357 [2024-12-06 17:02:42.589388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.589396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.589723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.589731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.590024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.590034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.590366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.590375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.590674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.590682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.591019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.591028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.591217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.591225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.591535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.591543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.591834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.591842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.592133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.592141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 
00:35:54.357 [2024-12-06 17:02:42.592540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.592548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.592841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.592850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.593133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.593141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.593444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.593452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.593730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.593738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.594028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.594036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.594341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.594350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.594635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.594643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.594931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.594938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.595143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.595152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 
00:35:54.357 [2024-12-06 17:02:42.595348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.595357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.595634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.595642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.595925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.595934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.596136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.357 [2024-12-06 17:02:42.596146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.357 qpair failed and we were unable to recover it. 00:35:54.357 [2024-12-06 17:02:42.596439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.596448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.596791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.596801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.597078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.597086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.597391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.597400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.597686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.597694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.598016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.598024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-12-06 17:02:42.598312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.598320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.598632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.598640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.598983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.598991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.599284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.599292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.599579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.599587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.599880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.599889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.600183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.600191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.600490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.600498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.600800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.600808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.601110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.601118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-12-06 17:02:42.601329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.601337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.601645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.601653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.601934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.601943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.602221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.602229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.602513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.602521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.602798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.602806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.603104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.603113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.603391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.603399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.603688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.603698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.604016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.604026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-12-06 17:02:42.604327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.604335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.604637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.604646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.604937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.604945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.605244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.605253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.605554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.605562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.605857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.605865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.606156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.606165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.606454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.606463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.606763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.606771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.607063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.607072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 
00:35:54.358 [2024-12-06 17:02:42.607373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.607382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.607671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.358 [2024-12-06 17:02:42.607679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.358 qpair failed and we were unable to recover it. 00:35:54.358 [2024-12-06 17:02:42.607972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.607981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.608343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.608352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.608647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.608656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.608948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.608956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.609263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.609271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.609613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.609621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.609916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.609924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.610225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.610235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-12-06 17:02:42.610508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.610517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.610805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.610813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.611098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.611110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.611406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.611415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.611715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.611723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.612004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.612013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.612172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.612181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.612470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.612479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.612762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.612770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.613050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.613059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-12-06 17:02:42.613244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.613252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.613556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.613564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.613866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.613874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.614159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.614167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.614425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.614433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.614720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.614728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.615006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.615014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.615300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.615308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.615594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.615602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.615907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.615915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 
00:35:54.359 [2024-12-06 17:02:42.616215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.616223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.616511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.616519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.616811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.616819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.617115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.617123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.617401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.617409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.359 [2024-12-06 17:02:42.617705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.359 [2024-12-06 17:02:42.617715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.359 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.618009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.618016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.618310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.618318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.618620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.618628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.618967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.618976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-12-06 17:02:42.619302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.619310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.619631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.619639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.619943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.619951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.620242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.620250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.620545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.620552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.620837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.620845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.621144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.621152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.621501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.621510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.621815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.621824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.622125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.622135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-12-06 17:02:42.622473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.622483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.622774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.622782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.622952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.622960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.623146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.623154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.623470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.623478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.623814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.623823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.624122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.624131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.624413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.624421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.624715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.624723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 00:35:54.360 [2024-12-06 17:02:42.625007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.360 [2024-12-06 17:02:42.625015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.360 qpair failed and we were unable to recover it. 
00:35:54.360 [2024-12-06 17:02:42.625204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.360 [2024-12-06 17:02:42.625213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:54.360 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt between 17:02:42.625204 and 17:02:42.688164: connect() to 10.0.0.2, port=4420 fails with errno = 111 and tqpair=0x7f89e0000b90 cannot be recovered ...]
00:35:54.366 [2024-12-06 17:02:42.688164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.366 [2024-12-06 17:02:42.688173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:54.366 qpair failed and we were unable to recover it.
00:35:54.366 [2024-12-06 17:02:42.688475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.688484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.688774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.688785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.689096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.689110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.689402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.689411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.689700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.689708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.690019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.690028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.690230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.690239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.690541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.366 [2024-12-06 17:02:42.690551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.366 qpair failed and we were unable to recover it. 00:35:54.366 [2024-12-06 17:02:42.690854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.690864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.691174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.691183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-12-06 17:02:42.691484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.691493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.691787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.691796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.692097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.692112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.692288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.692297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.692595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.692604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.692919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.692928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.693130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.693148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.693474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.693483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.693809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.693817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.694115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.694127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-12-06 17:02:42.694415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.694425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.694722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.694731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.695026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.695035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.695290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.695300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.695602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.695611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.695849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.695858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.696154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.696163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.696470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.696480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.696813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.696822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.697112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.697121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-12-06 17:02:42.697408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.697417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.697714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.697722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.698014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.698024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.698325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.698334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.698628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.698636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.698922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.698931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.699241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.699251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.699591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.699599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.699889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.699897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.700186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.700194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 
00:35:54.367 [2024-12-06 17:02:42.700494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.700502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.700790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.700800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.701115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.701125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.701418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.701427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.701752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.701761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.702105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.367 [2024-12-06 17:02:42.702114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.367 qpair failed and we were unable to recover it. 00:35:54.367 [2024-12-06 17:02:42.702416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.702426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.702721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.702730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.702900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.702908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.703219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.703228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-12-06 17:02:42.703535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.703544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.703851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.703859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.704151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.704160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.704439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.704448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.704616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.704624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.704957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.704967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.705258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.705270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.705630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.705638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.705943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.705951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.706249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.706260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-12-06 17:02:42.706564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.706573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.706859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.706869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.707160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.707169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.707488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.707500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.707783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.707792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.708179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.708189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.708525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.708536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.708825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.708834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.709158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.709166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.709476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.709485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-12-06 17:02:42.709802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.709811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.710119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.710128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.710453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.710461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.710754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.710764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.711073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.711081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.711384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.711394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.711748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.711757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.712062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.712071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.712373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.712382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.712584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.712593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 
00:35:54.368 [2024-12-06 17:02:42.712903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.712911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.713108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.713117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.713417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.713428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.713733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.368 [2024-12-06 17:02:42.713741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.368 qpair failed and we were unable to recover it. 00:35:54.368 [2024-12-06 17:02:42.714042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.714050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.714342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.714351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.714521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.714530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.714821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.714830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.715138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.715147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.715491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.715500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-12-06 17:02:42.715663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.715673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.715984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.715992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.716315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.716325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.716626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.716634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.716930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.716939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.717257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.717267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.717578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.717587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.717880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.717889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.718044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.718053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.718397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.718408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-12-06 17:02:42.718698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.718707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.719008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.719017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.719301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.719313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.719653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.719662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.719956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.719964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.720265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.720274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.720624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.720633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.720939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.720947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.721307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.721316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.721666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.721674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-12-06 17:02:42.721983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.721991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.722327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.722335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.722629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.722639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.722797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.722806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.723098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.723111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.723316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.723328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.723612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.723621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.723914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.723922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.724107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.724116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.724417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.724426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 
00:35:54.369 [2024-12-06 17:02:42.724737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.724747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.725032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.369 [2024-12-06 17:02:42.725041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.369 qpair failed and we were unable to recover it. 00:35:54.369 [2024-12-06 17:02:42.725348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.725357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.725650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.725662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.725973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.725981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.726319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.726329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.726634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.726643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.726944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.726955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.727260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.727269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.727660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.727668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-12-06 17:02:42.727978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.727987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.728277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.728285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.728652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.728661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.728964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.728973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.729320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.729328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.729641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.729649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.729958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.729966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.730276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.730285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.730418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.730428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.730689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.730699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-12-06 17:02:42.731044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.731053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.731341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.731349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.731652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.731660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.731941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.731949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.732146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.732154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.732426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.732434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.732743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.732751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.733031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.733040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.733329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.733337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.733649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.733658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 
00:35:54.370 [2024-12-06 17:02:42.733789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.733798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.734013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.734022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.734299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.734307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.734601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.734610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.734900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.370 [2024-12-06 17:02:42.734909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.370 qpair failed and we were unable to recover it. 00:35:54.370 [2024-12-06 17:02:42.735198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.735206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.735490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.735498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.735671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.735679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.735971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.735979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.736214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.736222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-12-06 17:02:42.736514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.736522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.736824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.736831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.737115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.737123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.737426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.737434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.737699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.737707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.737993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.738001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.738193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.738202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.738512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.738519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.738794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.738803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.739096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.739112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-12-06 17:02:42.739403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.739411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.739699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.739707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.739982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.739990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.740258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.740266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.740557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.740565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.740815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.740823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.741145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.741154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.741483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.741491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.741784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.741792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.742080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.742090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-12-06 17:02:42.742417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.742425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.742721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.742729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.743042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.743050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.743333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.743341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.743627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.743635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.743918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.743927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.744207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.744216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.744506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.744514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.744760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.744768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.745071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.745080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 
00:35:54.371 [2024-12-06 17:02:42.745394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.745403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.745724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.745732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.746022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.371 [2024-12-06 17:02:42.746030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.371 qpair failed and we were unable to recover it. 00:35:54.371 [2024-12-06 17:02:42.746230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.746238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.746541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.746548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.746766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.746774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.747031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.747039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.747335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.747344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.747646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.747654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.747949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.747957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-12-06 17:02:42.748310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.748318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.748608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.748616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.748898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.748906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.749197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.749205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.749539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.749547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.749851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.749859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.750140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.750149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.750436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.750444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.750750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.750758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.751072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.751080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-12-06 17:02:42.751385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.751394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.751672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.751681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.751965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.751973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.752283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.752291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.752580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.752589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.752869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.752878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.753149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.753157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.753507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.753515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.753797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.753805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.753966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.753975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-12-06 17:02:42.754281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.754290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.754570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.754578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.754872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.754880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.755097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.755108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.755389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.755397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.755687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.755695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.755987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.755995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.756318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.756327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.756667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.756676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.756965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.756974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 
00:35:54.372 [2024-12-06 17:02:42.757261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.372 [2024-12-06 17:02:42.757269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.372 qpair failed and we were unable to recover it. 00:35:54.372 [2024-12-06 17:02:42.757570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.757578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.757870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.757878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.758157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.758166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.758346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.758356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.758680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.758688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.758990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.758998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.759274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.759282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.759572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.759581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.759886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.759894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 
00:35:54.373 [2024-12-06 17:02:42.760196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.760205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.760513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.760521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.760812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.760820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.761174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.761182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.761462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.761470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.761770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.761779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.762117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.762128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.762413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.762422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.762705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.762713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.763003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.763012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 
00:35:54.373 [2024-12-06 17:02:42.763248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.763257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.763563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.763572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.763841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.763851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.764131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.764140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.764420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.764430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.764724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.764732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.765075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.765084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.765311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.765320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.765632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.765641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.765940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.765949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 
00:35:54.373 [2024-12-06 17:02:42.766229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.766238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.766519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.766527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.766802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.766809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.767152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.767161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.767463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.767472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.767798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.767807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.768125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.768133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.768410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.768418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.373 [2024-12-06 17:02:42.768704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.373 [2024-12-06 17:02:42.768712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.373 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.768994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.769003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-12-06 17:02:42.769309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.769318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.769596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.769604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.769888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.769897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.770227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.770236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.770537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.770546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.770854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.770863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.771139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.771148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.771443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.771451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.771730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.771738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.772016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.772025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-12-06 17:02:42.772327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.772336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.772629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.772637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.772934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.772942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.773243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.773251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.773539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.773547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.773886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.773895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.774172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.774182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.774515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.774533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.774847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.774855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.775152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.775160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-12-06 17:02:42.775474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.775481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.775762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.775771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.776056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.776064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.776353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.776361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.776651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.776660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.776936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.776944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.777236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.777244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.777521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.777529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.777815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.777823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.778115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.778123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 
00:35:54.374 [2024-12-06 17:02:42.778410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.778418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.374 [2024-12-06 17:02:42.778600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.374 [2024-12-06 17:02:42.778609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.374 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.778789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.778798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.779004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.779012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.779349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.779357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.779537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.779544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.779844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.779852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.780154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.780163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.780474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.780482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 00:35:54.375 [2024-12-06 17:02:42.780783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.375 [2024-12-06 17:02:42.780790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.375 qpair failed and we were unable to recover it. 
00:35:54.380 [2024-12-06 17:02:42.837856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.837864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.838145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.838153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.838448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.838456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.838756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.838764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.839044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.839052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.839343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.839351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.839651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.839659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.839952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.839959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.840256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.840265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.840543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.840552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 
00:35:54.380 [2024-12-06 17:02:42.840831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.840838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.841132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.841140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.841449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.841457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.841739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.841748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.842025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.842034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.842314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.842324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.842605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.842614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.842788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.842797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.843111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.843120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.843411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.843419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 
00:35:54.380 [2024-12-06 17:02:42.843726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.843734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.844030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.844038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.844331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.844340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.844625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.844633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.380 qpair failed and we were unable to recover it. 00:35:54.380 [2024-12-06 17:02:42.844914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.380 [2024-12-06 17:02:42.844922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.845221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.845230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.845520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.845531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.845809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.845817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.846146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.846155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.846492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.846500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-12-06 17:02:42.846801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.846809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.847098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.847109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.847430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.847437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.847781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.847790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.848091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.848098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.848415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.848423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.848717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.848724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.848935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.848943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.849255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.849264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.849570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.849578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-12-06 17:02:42.849860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.849868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.850165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.850174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.850454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.850462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.850739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.850748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.851052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.851061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.851241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.851250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.851493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.851500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.851796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.851805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.852079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.852087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.852389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.852399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-12-06 17:02:42.852675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.852684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.852876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.852885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.853157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.853165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.853359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.853368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.853672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.853681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.853992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.854000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.854317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.854325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.854625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.854634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.854912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.854921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.855198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.855207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 
00:35:54.381 [2024-12-06 17:02:42.855486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.855494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.855724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.381 [2024-12-06 17:02:42.855733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.381 qpair failed and we were unable to recover it. 00:35:54.381 [2024-12-06 17:02:42.856051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.856059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.856377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.856386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.856664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.856672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.856958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.856966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.857329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.857338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.857614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.857622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.857804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.857813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.858086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.858094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.382 [2024-12-06 17:02:42.858393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.858401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.858688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.858696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.859021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.859031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.859331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.859339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.859625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.859633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.859921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.859930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.860230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.860238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.860557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.860566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.860904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.860913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.861221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.861230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.382 [2024-12-06 17:02:42.861510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.861518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.861799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.861807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.862166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.862174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.862486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.862494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.862793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.862801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.863122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.863130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.863435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.863443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.863724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.863732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.864012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.864020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.864308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.864317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.382 [2024-12-06 17:02:42.864592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.864600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.864877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.864885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.865213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.865222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.865406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.865415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.865704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.865714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.865869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.865878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.866189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.866197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.866537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.866545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.866841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.866849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 00:35:54.382 [2024-12-06 17:02:42.867131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.382 [2024-12-06 17:02:42.867140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.382 qpair failed and we were unable to recover it. 
00:35:54.383 [2024-12-06 17:02:42.867300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.867309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.867640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.867649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.867937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.867946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.868242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.868250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.868544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.868552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.868864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.868872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.869166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.869176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.869478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.869487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.869663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.869671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.869982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.869990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 
00:35:54.383 [2024-12-06 17:02:42.870139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.870148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.870363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.870372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.870694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.870704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.870999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.871007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.871295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.871304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.871585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.871594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.871882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.871891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.872176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.872184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.872496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.872504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.872821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.872830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 
00:35:54.383 [2024-12-06 17:02:42.873128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.873137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.873402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.873410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.873717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.873726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.874005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.874013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.874229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.874237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.874552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.874560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.874861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.874869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.875156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.875165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.875477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.875486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.875767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.875776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 
00:35:54.383 [2024-12-06 17:02:42.876063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.876071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.876247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.876256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.876592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.876600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.876913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.876922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.383 qpair failed and we were unable to recover it. 00:35:54.383 [2024-12-06 17:02:42.877082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.383 [2024-12-06 17:02:42.877091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-12-06 17:02:42.877393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-12-06 17:02:42.877401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-12-06 17:02:42.877683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-12-06 17:02:42.877692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-12-06 17:02:42.877873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-12-06 17:02:42.877881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-12-06 17:02:42.878200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-12-06 17:02:42.878208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 00:35:54.384 [2024-12-06 17:02:42.878495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.384 [2024-12-06 17:02:42.878504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.384 qpair failed and we were unable to recover it. 
00:35:54.384 [2024-12-06 17:02:42.878777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.384 [2024-12-06 17:02:42.878785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:54.384 qpair failed and we were unable to recover it.
00:35:54.384-00:35:54.389 [2024-12-06 17:02:42.879059 through 17:02:42.940337] (the same three-line connect()/qpair-failure sequence repeats for every reconnect attempt in this window, always against tqpair=0x7f89e0000b90, addr=10.0.0.2, port=4420; only the timestamps differ)
00:35:54.389 [2024-12-06 17:02:42.940647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-12-06 17:02:42.940656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-12-06 17:02:42.940958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.389 [2024-12-06 17:02:42.940966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.389 qpair failed and we were unable to recover it. 00:35:54.389 [2024-12-06 17:02:42.941161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.941171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.941508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.941517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.941878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.941890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.942179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.942188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.942400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.942408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.942719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.942727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.943038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.943046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.943361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.943372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-12-06 17:02:42.943739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.943747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.944105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.944118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.944306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.944314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.944664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.944673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.944984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.944993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.945277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.945286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.945493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.945502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.945658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.945666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.945982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.945991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.946324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.946334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-12-06 17:02:42.946633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.946642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.946947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.946956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.947254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.947262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.947532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.947542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.947851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.947862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.948047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.948055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.948375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.948384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.948580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.948589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.948905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.948914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.949222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.949231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 
00:35:54.390 [2024-12-06 17:02:42.949399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.390 [2024-12-06 17:02:42.949406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.390 qpair failed and we were unable to recover it. 00:35:54.390 [2024-12-06 17:02:42.949719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.949730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.949947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.949957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.950185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.950193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.950378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.950387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.950711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.950720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.951028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.951037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.951372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.951382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.951685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.951694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.951903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.951911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 
00:35:54.391 [2024-12-06 17:02:42.952249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.952258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.952403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.952411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.952735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.952743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.953051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.953059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.953371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.953381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.953568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.953580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.953726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.953734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.953935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.953945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.954211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.954222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.954538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.954546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 
00:35:54.391 [2024-12-06 17:02:42.954711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.954720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.955017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.955025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.955328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.955338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.955648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.955657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.955825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.955834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.956153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.956161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.956448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.956457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.956769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.956779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.956962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.956971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 00:35:54.391 [2024-12-06 17:02:42.957162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.391 [2024-12-06 17:02:42.957171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.391 qpair failed and we were unable to recover it. 
00:35:54.391 [2024-12-06 17:02:42.957500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.957509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.957863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.957871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.958055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.958063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.958252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.958263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.958429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.958438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.958616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.958625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.958929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.958938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.959253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.959262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.959534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.959543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.959821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.959830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 
00:35:54.392 [2024-12-06 17:02:42.960122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.960131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.960459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.960467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.960646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.960654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.960825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.960837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.961154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.961163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.961474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.961482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.961770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.961778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.962071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.962080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.962379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.962388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.962696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.962705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 
00:35:54.392 [2024-12-06 17:02:42.963019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.963028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.963220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.963229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.963437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.963445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.963752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.963760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.964057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.964064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.964357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.964365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.964677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.964685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.964734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.964743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.392 qpair failed and we were unable to recover it. 00:35:54.392 [2024-12-06 17:02:42.964988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.392 [2024-12-06 17:02:42.964996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.965445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.965454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 
00:35:54.393 [2024-12-06 17:02:42.965753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.965761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.966043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.966050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.966324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.966332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.966617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.966626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.966906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.966914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.967195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.967203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.967498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.967508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.967779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.967788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.968020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.968029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.968266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.968275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 
00:35:54.393 [2024-12-06 17:02:42.968574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.968583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.968873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.968882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.969164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.969173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.969469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.969480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.969756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.969764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.970060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.970069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.970416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.970425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.970706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.970714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.971016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.971024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.971327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.971335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 
00:35:54.393 [2024-12-06 17:02:42.971634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.971643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.971933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.971941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.972269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.972277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.972472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.972480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.972774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.972782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.973069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.973077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.973399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.973408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.973603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.393 [2024-12-06 17:02:42.973611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.393 qpair failed and we were unable to recover it. 00:35:54.393 [2024-12-06 17:02:42.973791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.973799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.974057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.974066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 
00:35:54.394 [2024-12-06 17:02:42.974382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.974391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.974693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.974701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.974932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.974940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.975158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.975166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.975468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.975476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.975674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.975682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.975980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.975989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.976343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.976351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.976651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.976659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 00:35:54.394 [2024-12-06 17:02:42.976938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.976947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it. 
00:35:54.394 [2024-12-06 17:02:42.977137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.394 [2024-12-06 17:02:42.977147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.394 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously from 17:02:42.977 through 17:02:43.038: posix_sock_create reports connect() failed with errno = 111 on every attempt, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f89e0000b90 (addr=10.0.0.2, port=4420), and each qpair fails without recovery ...]
00:35:54.679 [2024-12-06 17:02:43.038681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-12-06 17:02:43.038688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it.
00:35:54.679 [2024-12-06 17:02:43.038985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-12-06 17:02:43.038993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-12-06 17:02:43.039291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-12-06 17:02:43.039299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-12-06 17:02:43.039590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-12-06 17:02:43.039598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.679 [2024-12-06 17:02:43.039876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.679 [2024-12-06 17:02:43.039887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.679 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.040037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.040046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.040404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.040414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.040712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.040721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.041006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.041014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.041345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.041355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.041651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.041659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-12-06 17:02:43.041964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.041972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.042274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.042282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.042479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.042487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.042794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.042802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.043102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.043111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.043397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.043405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.043691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.043699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.043983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.043992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.044339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.044348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.044648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.044656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-12-06 17:02:43.044956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.044965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.045257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.045265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.045559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.045568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.045853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.045862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.046139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.046148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.046423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.046431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.046732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.046741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.047033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.047042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.047398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.047408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.047692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.047701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 
00:35:54.680 [2024-12-06 17:02:43.047985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.047994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.048274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.048283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.680 [2024-12-06 17:02:43.048638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.680 [2024-12-06 17:02:43.048647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.680 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.048938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.048946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.049241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.049250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.049596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.049605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.049890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.049898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.050181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.050190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.050470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.050478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.050763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.050771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-12-06 17:02:43.051051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.051060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.051351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.051359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.051676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.051686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.051985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.051995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.052319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.052328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.052639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.052647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.052947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.052955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.053118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.053126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.053298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.053306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.053477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.053485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 
00:35:54.681 [2024-12-06 17:02:43.053808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.053816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.054128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.054137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.054437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.054445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.054735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.054743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.055037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.681 [2024-12-06 17:02:43.055045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.681 qpair failed and we were unable to recover it. 00:35:54.681 [2024-12-06 17:02:43.055349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.055357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.055659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.055667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.055971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.055980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.056269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.056279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.056588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.056596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-12-06 17:02:43.056889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.056898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.057202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.057210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.057500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.057508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.057692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.057702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.057976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.057984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.058313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.058322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.058604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.058612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.058894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.058902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.059187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.059195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.059509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.059517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-12-06 17:02:43.059808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.059818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.060113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.060121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.060411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.060419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.060722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.060730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.060932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.060940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.061267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.061276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.061595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.061604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.061892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.061900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.062240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.062249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.062575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.062583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 
00:35:54.682 [2024-12-06 17:02:43.062888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.062896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.063084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.063092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.063386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.063394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.682 qpair failed and we were unable to recover it. 00:35:54.682 [2024-12-06 17:02:43.063687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.682 [2024-12-06 17:02:43.063695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.063983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.063991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.064272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.064280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.064574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.064582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.064876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.064884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.065202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.065211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.065360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.065370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 
00:35:54.683 [2024-12-06 17:02:43.065677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.065686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.065967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.065977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.066256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.066265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.066544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.066552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.066889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.066897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.067209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.067218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.067530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.067537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.067821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.067829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.068028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.068037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.068304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.068313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 
00:35:54.683 [2024-12-06 17:02:43.068604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.068612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.068905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.068914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.069200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.069209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.069493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.069502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.069837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.069846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.070125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.070133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.070410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.070419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.070699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.070707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.071041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.071050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.071344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.071353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 
00:35:54.683 [2024-12-06 17:02:43.071658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.071669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.683 qpair failed and we were unable to recover it. 00:35:54.683 [2024-12-06 17:02:43.071943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.683 [2024-12-06 17:02:43.071952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.072255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.072264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.072572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.072580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.072870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.072878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.073120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.073128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.073418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.073427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.073729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.073738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.074086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.074094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.074377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.074385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 
00:35:54.684 [2024-12-06 17:02:43.074581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.074590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.074898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.074906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.075122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.075130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.075421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.075429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.075736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.075744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.076051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.076058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.076389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.076397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.076705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.076713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.077008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.077016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 00:35:54.684 [2024-12-06 17:02:43.077325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.684 [2024-12-06 17:02:43.077333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.684 qpair failed and we were unable to recover it. 
00:35:54.684 [2024-12-06 17:02:43.077617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.684 [2024-12-06 17:02:43.077625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:54.684 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 17:02:43.077907 through 17:02:43.139314: each attempt fails with connect() errno = 111 against addr=10.0.0.2, port=4420 on tqpair=0x7f89e0000b90, and each qpair is reported as unrecoverable ...]
00:35:54.692 [2024-12-06 17:02:43.139314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.692 [2024-12-06 17:02:43.139322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:54.692 qpair failed and we were unable to recover it.
00:35:54.692 [2024-12-06 17:02:43.139492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.139501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.139780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.139788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.140079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.140087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.140391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.140400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.140693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.140702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.141023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.141032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.141322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.141330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.141624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.141633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.141930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.141938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.142260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.142269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-12-06 17:02:43.142474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.142484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.142764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.142773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.142933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.142941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.143223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.143231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.143535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.143543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.143780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.143789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.144071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.144080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.144445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.144454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.144763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.144772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.145069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.145078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 
00:35:54.692 [2024-12-06 17:02:43.145415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.145424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.145708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.145716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.146029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.146037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.146362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.146371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.146685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.146694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.146850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.146859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.692 qpair failed and we were unable to recover it. 00:35:54.692 [2024-12-06 17:02:43.147169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.692 [2024-12-06 17:02:43.147178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.147486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.147494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.147780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.147789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.148083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.148092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-12-06 17:02:43.148403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.148412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.148608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.148616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.148767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.148775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.149070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.149078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.149380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.149389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.149679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.149688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.149987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.149996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.150334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.150343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.150681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.150689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.150963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.150972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-12-06 17:02:43.151307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.151316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.151583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.151592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.151886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.151894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.152178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.152187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.152540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.152548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.152720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.152729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.153046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.153055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.153360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.153368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.153670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.153678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 00:35:54.693 [2024-12-06 17:02:43.153965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.693 [2024-12-06 17:02:43.153974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.693 qpair failed and we were unable to recover it. 
00:35:54.693 [2024-12-06 17:02:43.154263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.154274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.154579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.154588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.154866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.154874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.155157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.155165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.155475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.155484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.155768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.155777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.156123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.156132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.156475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.156484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.156788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.156796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.157077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.157085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-12-06 17:02:43.157390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.157398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.157745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.157753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.158041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.158050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.158368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.158377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.158668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.158676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.158959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.158967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.159280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.159289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.159629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.159637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.159951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.159960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.160255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.160265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 
00:35:54.694 [2024-12-06 17:02:43.160582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.160590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.160886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.160895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.161182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.161191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.161488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.161496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.161778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.161786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.162071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.162080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.162385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.162395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.162686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.162694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.162884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.162892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.694 qpair failed and we were unable to recover it. 00:35:54.694 [2024-12-06 17:02:43.163197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.694 [2024-12-06 17:02:43.163207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 
00:35:54.695 [2024-12-06 17:02:43.163498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.163507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.163786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.163795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.164074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.164083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.164386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.164395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.164586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.164595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.164872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.164880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.165161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.165170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.165483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.165492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.165776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.165785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.166063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.166072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 
00:35:54.695 [2024-12-06 17:02:43.166389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.166400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.166682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.166691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.166855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.166864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.167170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.167179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.167498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.167506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.167646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.167655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.167979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.167988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.168312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.168321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.168657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.168666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.168965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.168975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 
00:35:54.695 [2024-12-06 17:02:43.169284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.169292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.169644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.169653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.169801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.169810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.170123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.170133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.170361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.170370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.170684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.170692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.171040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.171049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.171338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.695 [2024-12-06 17:02:43.171347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.695 qpair failed and we were unable to recover it. 00:35:54.695 [2024-12-06 17:02:43.171683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.171692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.171966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.171975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 
00:35:54.696 [2024-12-06 17:02:43.172143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.172152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.172469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.172478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.172753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.172762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.173107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.173116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.173296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.173305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.173602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.173611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.173903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.173912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.174190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.174199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.174537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.174546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.174826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.174834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 
00:35:54.696 [2024-12-06 17:02:43.175127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.175136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.175552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.175561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.175864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.175872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.176206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.176215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.176488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.176497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.176777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.176785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.177069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.177077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.177371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.177380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.177671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.177679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 00:35:54.696 [2024-12-06 17:02:43.177964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.177972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it. 
00:35:54.696 [2024-12-06 17:02:43.178309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.696 [2024-12-06 17:02:43.178321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.696 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 in posix.c:1054, sock connection error in nvme_tcp.c:2288 for tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every connection retry from 17:02:43.178 through 17:02:43.241; duplicates elided, only the timestamps differ ...]
00:35:54.704 [2024-12-06 17:02:43.241534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.241542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it.
00:35:54.704 [2024-12-06 17:02:43.241847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.241856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.242167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.242175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.242470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.242480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.242865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.242874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.243212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.243221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.243343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.243352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.243704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.243713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.244015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.244024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.244313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.244321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.244556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.244565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 
00:35:54.704 [2024-12-06 17:02:43.244884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.244893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.245172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.245181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.245456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.245465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.245658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.245667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.245950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.245958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.246249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.246258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.246531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.246540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.246820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.246830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.247150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.247159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.247425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.247433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 
00:35:54.704 [2024-12-06 17:02:43.247613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.247622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.704 qpair failed and we were unable to recover it. 00:35:54.704 [2024-12-06 17:02:43.247919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.704 [2024-12-06 17:02:43.247928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.248240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.248250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.248565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.248573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.248870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.248879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.249183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.249192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.249514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.249522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.249831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.249839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.250122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.250131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.250441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.250449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 
00:35:54.705 [2024-12-06 17:02:43.250739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.250748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.251139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.251148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.251451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.251460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.251768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.252064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.252072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.252359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.252368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.252692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.252700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.252980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.252988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.253202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.253597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.253605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 
00:35:54.705 [2024-12-06 17:02:43.253808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.253816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.254147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.254156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.254532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.254541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.254842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.705 [2024-12-06 17:02:43.254851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.705 qpair failed and we were unable to recover it. 00:35:54.705 [2024-12-06 17:02:43.255147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.255156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.255439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.255448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.255850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.255858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.256155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.256164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.256459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.256468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.256758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.256767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 
00:35:54.706 [2024-12-06 17:02:43.257061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.257070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.257414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.257423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.257722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.257731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.258025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.258034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.258334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.258343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.258505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.258514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.258808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.258817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.259103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.259115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.259413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.259421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.259619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.259628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 
00:35:54.706 [2024-12-06 17:02:43.259933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.259942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.260290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.260300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.260584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.260593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.260869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.260878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.261177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.261185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.261490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.261499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.261800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.261810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.262099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.262114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.262268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.262277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.262572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.262582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 
00:35:54.706 [2024-12-06 17:02:43.262749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.262758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.263051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.706 [2024-12-06 17:02:43.263059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.706 qpair failed and we were unable to recover it. 00:35:54.706 [2024-12-06 17:02:43.263350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.263359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.263642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.263651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.263939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.263947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.264238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.264247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.264541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.264550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.264823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.264832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.265123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.265133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.265438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.265447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 
00:35:54.707 [2024-12-06 17:02:43.265734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.265742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.266039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.266048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.266366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.266374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.266620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.266629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.266958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.266967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.267253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.267262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.267570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.267579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.267884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.267893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.268195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.268204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.268503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.268512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 
00:35:54.707 [2024-12-06 17:02:43.268847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.268856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.269137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.269146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.269452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.269461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.269751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.269759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.270042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.270051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.270350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.270359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.270634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.270643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.270927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.270937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.271230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.271239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 00:35:54.707 [2024-12-06 17:02:43.271532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.271541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.707 qpair failed and we were unable to recover it. 
00:35:54.707 [2024-12-06 17:02:43.271827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.707 [2024-12-06 17:02:43.271836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.272119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.272128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.272402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.272410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.272690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.272698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.272979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.272987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.273289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.273298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.273479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.273488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.273778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.273787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.274065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.274073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.274233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.274242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 
00:35:54.708 [2024-12-06 17:02:43.274508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.274516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.274813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.274821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.275107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.275115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.275466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.275474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.275775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.275784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.276059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.276068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.276366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.276375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.276653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.276662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.276869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.276878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.277138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.277147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 
00:35:54.708 [2024-12-06 17:02:43.277307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.277316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.277637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.277646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.277944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.277953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.278233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.278242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.278536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.278544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.278822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.278830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.279114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.279122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.279437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.279446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.279736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.708 [2024-12-06 17:02:43.279745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.708 qpair failed and we were unable to recover it. 00:35:54.708 [2024-12-06 17:02:43.280029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.709 [2024-12-06 17:02:43.280037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.709 qpair failed and we were unable to recover it. 
00:35:54.709 [2024-12-06 17:02:43.280242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.709 [2024-12-06 17:02:43.280251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.709 qpair failed and we were unable to recover it.
[... the same three-message error group repeats for every reconnect attempt in this span: roughly 200 occurrences between 2024-12-06 17:02:43.280242 and 17:02:43.342828 (log clock 00:35:54.709 through 00:35:54.716), each attempt failing identically with connect() errno = 111 for tqpair=0x7f89e0000b90, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:35:54.716 [2024-12-06 17:02:43.342819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.342828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it.
00:35:54.716 [2024-12-06 17:02:43.342898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.342907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.343196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.343205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.343472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.343481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.343852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.343860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.344145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.344620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.344629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.344947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.344956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.345190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.345199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.345494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.345503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.345789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.345798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 
00:35:54.716 [2024-12-06 17:02:43.346134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.346142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.346336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.346345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.346631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.346639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.346916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.346925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.347257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.347266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.347438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.347447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.347635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.347644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.347982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.347991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.348421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.348429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.348605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.348614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 
00:35:54.716 [2024-12-06 17:02:43.348923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.348931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.349164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.349173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.349516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.349525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.349751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.349760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.716 [2024-12-06 17:02:43.350120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.716 [2024-12-06 17:02:43.350128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.716 qpair failed and we were unable to recover it. 00:35:54.717 [2024-12-06 17:02:43.350404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-12-06 17:02:43.350413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.717 [2024-12-06 17:02:43.350702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.717 [2024-12-06 17:02:43.350711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.717 qpair failed and we were unable to recover it. 00:35:54.992 [2024-12-06 17:02:43.350984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-12-06 17:02:43.350994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-12-06 17:02:43.351335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-12-06 17:02:43.351344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-12-06 17:02:43.351623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-12-06 17:02:43.351632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 
00:35:54.992 [2024-12-06 17:02:43.351937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-12-06 17:02:43.351945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-12-06 17:02:43.352277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.992 [2024-12-06 17:02:43.352286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.992 qpair failed and we were unable to recover it. 00:35:54.992 [2024-12-06 17:02:43.352469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.352477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.352652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.352660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.352946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.352955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.353290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.353299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.353636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.353644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.353971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.353981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.354299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.354309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.354614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.354623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 
00:35:54.993 [2024-12-06 17:02:43.354916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.354925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.355276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.355285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.355620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.355628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.355905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.355914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.356192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.356201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.356528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.356537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.356828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.356837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.357135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.357143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.357335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.357344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.357656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.357665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 
00:35:54.993 [2024-12-06 17:02:43.357945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.357954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.358287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.358296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.358584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.358593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.358947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.358955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.359155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.359164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.359462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.359471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.359812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.359820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.360140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.360149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.360499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.360508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.360838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.360846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 
00:35:54.993 [2024-12-06 17:02:43.361152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.993 [2024-12-06 17:02:43.361162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.993 qpair failed and we were unable to recover it. 00:35:54.993 [2024-12-06 17:02:43.361367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.361376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.361716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.361725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.362002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.362011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.362395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.362404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.362713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.362722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.362892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.362901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.363200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.363209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.363464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.363473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.363783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.363792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 
00:35:54.994 [2024-12-06 17:02:43.364065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.364073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.364384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.364394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.364696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.364704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.364888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.364896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.365171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.365181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.365516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.365524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.365800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.365808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.366001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.366010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.366215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.366224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.366529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.366537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 
00:35:54.994 [2024-12-06 17:02:43.366851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.366860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.367136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.367145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.367359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.367368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.367728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.367736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.367906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.367915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.368248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.368257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.368613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.368621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.368812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.368820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.369113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.369121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 00:35:54.994 [2024-12-06 17:02:43.369373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.994 [2024-12-06 17:02:43.369381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.994 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-12-06 17:02:43.369552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.369561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.369747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.369756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.370112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.370121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.370464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.370473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.370798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.370806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.371045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.371053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.371364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.371372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.371663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.371672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.371959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.371967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.372351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.372359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-12-06 17:02:43.372540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.372548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.372820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.372829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.373009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.373018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.373360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.373368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.373580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.373590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.373759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.373768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.373959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.373967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.374314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.374322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.374592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.374601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.374934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.374943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-12-06 17:02:43.375112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.375121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.375336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.375345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.375634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.375643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.375968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.375977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.376306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.376315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.376583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.376592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.376794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.376803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.377123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.377132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.377407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.377416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 00:35:54.995 [2024-12-06 17:02:43.377706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.995 [2024-12-06 17:02:43.377715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.995 qpair failed and we were unable to recover it. 
00:35:54.995 [2024-12-06 17:02:43.378019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.378028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.378235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.378244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.378536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.378545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.378710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.378719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.379028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.379036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.379350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.379359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.379639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.379647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.379929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.379937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.380338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.380347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 00:35:54.996 [2024-12-06 17:02:43.380638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:54.996 [2024-12-06 17:02:43.380647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:54.996 qpair failed and we were unable to recover it. 
00:35:54.996 [2024-12-06 17:02:43.380950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:54.996 [2024-12-06 17:02:43.380959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:54.996 qpair failed and we were unable to recover it.
[... identical connect() failed / sock connection error / qpair failed sequence repeated for every reconnect attempt between 17:02:43.380950 and 17:02:43.439537; duplicate log entries collapsed ...]
00:35:55.003 [2024-12-06 17:02:43.439527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.003 [2024-12-06 17:02:43.439537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:55.003 qpair failed and we were unable to recover it.
00:35:55.003 [2024-12-06 17:02:43.439824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-12-06 17:02:43.439833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-12-06 17:02:43.440139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-12-06 17:02:43.440148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-12-06 17:02:43.440338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-12-06 17:02:43.440347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-12-06 17:02:43.440524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-12-06 17:02:43.440533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.003 qpair failed and we were unable to recover it. 00:35:55.003 [2024-12-06 17:02:43.440830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.003 [2024-12-06 17:02:43.440841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.441152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.441161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.441339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.441347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.441544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.441553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.441703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.441711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.441982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.441991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-12-06 17:02:43.442308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.442318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.442607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.442620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.442918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.442927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.443239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.443248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.443550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.443559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.443896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.443906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.444220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.444230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.444423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.444432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.444726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.444735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.445041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.445050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 
00:35:55.004 [2024-12-06 17:02:43.445376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.445386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.445691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.445702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.445890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.445899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.446167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.446176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.446479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.446489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.446782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.446791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.447138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.447147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.447421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.447430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.004 qpair failed and we were unable to recover it. 00:35:55.004 [2024-12-06 17:02:43.447744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.004 [2024-12-06 17:02:43.447753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.448058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.448068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 
00:35:55.005 [2024-12-06 17:02:43.448376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.448386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.448774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.448784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.448971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.448980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.449164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.449175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.449438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.449447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.449767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.449776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.450076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.450085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.450487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.450497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.450662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.450671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.450964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.450974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 
00:35:55.005 [2024-12-06 17:02:43.451299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.451309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.451617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.451626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.451970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.451979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.452294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.452306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.452618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.452629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.452935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.452944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.453256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.453266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.453607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.453616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.453912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.453921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.454242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.454251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 
00:35:55.005 [2024-12-06 17:02:43.454561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.454571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.454892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.454902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.455220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.455232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.455569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.455577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.455877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.455886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.456033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.456043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.456379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.005 [2024-12-06 17:02:43.456388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.005 qpair failed and we were unable to recover it. 00:35:55.005 [2024-12-06 17:02:43.456705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.456714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.457057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.457067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.457258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.457268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 
00:35:55.006 [2024-12-06 17:02:43.457547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.457556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.457860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.457869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.458043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.458052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.458347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.458357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.458659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.458672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.458971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.458980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.459293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.459303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.459648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.459657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.459952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.459961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.460112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.460121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 
00:35:55.006 [2024-12-06 17:02:43.460441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.460450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.460807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.460816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.461120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.461129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.461406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.461415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.461699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.461707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.462048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.462056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.462359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.462368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.462667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.462675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.462963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.462972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.463274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.463283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 
00:35:55.006 [2024-12-06 17:02:43.463457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.463466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.463667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.463675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.463944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.463953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.464283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.464292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.464600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.006 [2024-12-06 17:02:43.464611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.006 qpair failed and we were unable to recover it. 00:35:55.006 [2024-12-06 17:02:43.464885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.464894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.465248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.465257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.465429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.465438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.465738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.465747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.466047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.466056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 
00:35:55.007 [2024-12-06 17:02:43.466364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.466372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.466652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.466661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.466925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.466933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.467256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.467265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.467561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.467570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.467867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.467875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.468156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.468165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.468489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.468498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.468744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.468753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.469083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.469092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 
00:35:55.007 [2024-12-06 17:02:43.469395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.469404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.469683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.469692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.469974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.469983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.470257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.470266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.470568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.470577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.470899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.470907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.471206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.471215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.471510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.471519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.471798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.471807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.472084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.472093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 
00:35:55.007 [2024-12-06 17:02:43.472390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.472399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.472719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.472728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.473005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.473014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.007 qpair failed and we were unable to recover it. 00:35:55.007 [2024-12-06 17:02:43.473319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.007 [2024-12-06 17:02:43.473328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.473523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.473532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.473844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.473853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.474139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.474148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.474475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.474483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.474766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.474775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.475092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.475109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 
00:35:55.008 [2024-12-06 17:02:43.475381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.475389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.475693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.475702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.475994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.476003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.476314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.476323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.476605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.476615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.476893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.476901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.477219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.477228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.477512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.477521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.477799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.477808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 00:35:55.008 [2024-12-06 17:02:43.478091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.008 [2024-12-06 17:02:43.478102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.008 qpair failed and we were unable to recover it. 
00:35:55.008 [2024-12-06 17:02:43.478287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.008 [2024-12-06 17:02:43.478296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:55.008 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats for roughly two hundred further connect() attempts between 17:02:43.478 and 17:02:43.541: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error on tqpair=0x7f89e0000b90 (addr=10.0.0.2, port=4420), and the qpair cannot be recovered; only the timestamps differ between records ...]
00:35:55.016 [2024-12-06 17:02:43.541210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.016 [2024-12-06 17:02:43.541219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:55.016 qpair failed and we were unable to recover it.
00:35:55.016 [2024-12-06 17:02:43.541521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.541529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.541803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.541812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.541990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.541999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.542288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.542297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.542587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.542595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.542870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.542879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.543165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.543173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.543572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.543581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.543856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.543864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.544139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.544148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 
00:35:55.016 [2024-12-06 17:02:43.544452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.544460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.544745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.544756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.545024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.545033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.545348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.545357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.545651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.545659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.545939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.545948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.546225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.546235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.546551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.546560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.546853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.546861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.547159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.547168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 
00:35:55.016 [2024-12-06 17:02:43.547489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.547498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.547779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.016 [2024-12-06 17:02:43.547787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.016 qpair failed and we were unable to recover it. 00:35:55.016 [2024-12-06 17:02:43.548076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.548085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.548385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.548394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.548594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.548603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.548925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.548934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.549239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.549248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.549549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.549557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.549838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.549847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.550173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.550182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.017 [2024-12-06 17:02:43.550537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.550545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.550837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.550845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.551128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.551137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.551497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.551506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.551810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.551819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.552095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.552109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.552388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.552397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.552702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.552711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.553006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.553015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.553309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.553318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.017 [2024-12-06 17:02:43.553622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.553631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.553930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.553939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.554220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.554229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.554541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.554550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.554852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.554861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.555167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.555176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.555487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.555496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.555850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.555859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.556169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.556178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 00:35:55.017 [2024-12-06 17:02:43.556494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.017 [2024-12-06 17:02:43.556502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.017 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-06 17:02:43.556819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.556828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.557116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.557126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.557433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.557442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.557726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.557735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.558072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.558080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.558367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.558377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.558666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.558674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.558994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.559003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.559294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.559304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.559596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.559605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-06 17:02:43.559910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.559919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.560262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.560271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.560568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.560576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.560860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.560869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.561035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.561044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.561359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.561368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.561666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.561675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.561966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.561975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.562275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.562284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.562566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.562574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 
00:35:55.018 [2024-12-06 17:02:43.562901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.562910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.563194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.563203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.563504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.563513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.563796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.563805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.564134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.564143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.564453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.564462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.564741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.564750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.565044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.565053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.018 [2024-12-06 17:02:43.565378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.018 [2024-12-06 17:02:43.565387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.018 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.565676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.565685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 
00:35:55.019 [2024-12-06 17:02:43.566029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.566038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.566306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.566315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.566645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.566654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.566951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.566960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.567264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.567273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.567564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.567573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.567869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.567877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.568171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.568180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.568397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.568406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.568763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.568772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 
00:35:55.019 [2024-12-06 17:02:43.569071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.569080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.569244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.569255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.569598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.569606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.569898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.569907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.570210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.570220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.570528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.570536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.570833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.570841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.571184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.571193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.571526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.571535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.571851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.571860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 
00:35:55.019 [2024-12-06 17:02:43.572156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.572165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.572482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.572491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.572787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.572795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.573083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.573092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.573359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.573369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.573668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.573676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.573956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.573964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.019 qpair failed and we were unable to recover it. 00:35:55.019 [2024-12-06 17:02:43.574284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.019 [2024-12-06 17:02:43.574293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.574579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.574589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.574930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.574939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 
00:35:55.020 [2024-12-06 17:02:43.575241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.575249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.575442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.575452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.575777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.575786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.576084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.576092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.576422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.576431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.576720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.576729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.577010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.577018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.577333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.577342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.577630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.577639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.577923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.577932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 
00:35:55.020 [2024-12-06 17:02:43.578213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.578222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.578506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.578514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.578846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.578854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.579148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.579157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.579466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.579475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.579756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.579765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.580053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.580061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.580386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.580395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.580680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.580688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 00:35:55.020 [2024-12-06 17:02:43.580974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.020 [2024-12-06 17:02:43.580982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.020 qpair failed and we were unable to recover it. 
00:35:55.020 [2024-12-06 17:02:43.581301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.020 [2024-12-06 17:02:43.581310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:55.020 qpair failed and we were unable to recover it.
00:35:55.020 [... the same three-line error group repeats back-to-back from 17:02:43.581301 through 17:02:43.644386 (log timestamps 00:35:55.020 to 00:35:55.028): every connect() attempt to 10.0.0.2:4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:35:55.028 [2024-12-06 17:02:43.644377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.028 [2024-12-06 17:02:43.644386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:55.028 qpair failed and we were unable to recover it.
00:35:55.028 [2024-12-06 17:02:43.644666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.644675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.645004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.645013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.645305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.645314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.645614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.645623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.645921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.645930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.646223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.646231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.646538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.646547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.646865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.646873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.647175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.647185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.647472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.647480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 
00:35:55.028 [2024-12-06 17:02:43.647640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.647649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.647971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.647980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.648265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.648274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.648579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.648587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.648775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.648783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.648938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.648949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.649218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.649227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.649495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.649504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.649785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.028 [2024-12-06 17:02:43.649793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.028 qpair failed and we were unable to recover it. 00:35:55.028 [2024-12-06 17:02:43.650080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.650089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 
00:35:55.029 [2024-12-06 17:02:43.650375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.650384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.650669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.650678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.650977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.650986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.651299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.651307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.651637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.651646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.651936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.651944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.652104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.652113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.652415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.652424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.652732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.652741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.653034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.653042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 
00:35:55.029 [2024-12-06 17:02:43.653361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.653370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.653528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.653538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.653816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.653825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.654111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.654120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.654431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.654440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.654733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.654742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.654925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.654934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.655242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.655251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.655542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.655550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.655833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.655842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 
00:35:55.029 [2024-12-06 17:02:43.656121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.656130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.656311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.656320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.656607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.656615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.656938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.656947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.657237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.029 [2024-12-06 17:02:43.657246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.029 qpair failed and we were unable to recover it. 00:35:55.029 [2024-12-06 17:02:43.657540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.657549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.657849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.657858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.658141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.658150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.658328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.658337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.658638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.658647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 
00:35:55.030 [2024-12-06 17:02:43.658948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.658957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.659244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.659253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.659558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.659566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.659849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.659858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.660142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.660151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.660431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.660439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.660714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.660723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.661013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.661021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.661214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.661223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.661510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.661519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 
00:35:55.030 [2024-12-06 17:02:43.661853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.661861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.662172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.662181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.662522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.662531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.662868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.662877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.663169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.663178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.663493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.663501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.663784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.663792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.663988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.663998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.664690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.664708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.665434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.665450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 
00:35:55.030 [2024-12-06 17:02:43.665770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.665780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.666066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.666075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.666376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.666386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.030 [2024-12-06 17:02:43.666669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.030 [2024-12-06 17:02:43.666677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.030 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.666971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.666980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.667280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.667290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.667593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.667601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.667906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.667915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.668200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.668209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.668389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.668398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 
00:35:55.031 [2024-12-06 17:02:43.668725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.668734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.669021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.669030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.669398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.669407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.031 [2024-12-06 17:02:43.669581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.031 [2024-12-06 17:02:43.669590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.031 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.670127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.670145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.670407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.670417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.670744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.670753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.670928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.670937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.671207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.671217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.672017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.672034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 
00:35:55.305 [2024-12-06 17:02:43.672340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.672350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.672641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.672651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.672926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.672936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.673241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.673249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.673567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.673576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.673768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.305 [2024-12-06 17:02:43.673776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.305 qpair failed and we were unable to recover it. 00:35:55.305 [2024-12-06 17:02:43.674154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.674163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.674485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.674493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.674860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.674871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.675226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.675235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 
00:35:55.306 [2024-12-06 17:02:43.675529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.675538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.675702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.675710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.676020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.676029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.676337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.676346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.676631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.676640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.676800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.676809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.677166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.677175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.677498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.677508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.677797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.677805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.678072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.678080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 
00:35:55.306 [2024-12-06 17:02:43.678393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.678402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.678702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.678712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.678875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.678885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.679193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.679202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.679512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.679521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.679740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.679749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.680069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.680078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.680273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.680283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.680594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.680602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.680914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.680923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 
00:35:55.306 [2024-12-06 17:02:43.681235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.681245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.681469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.681479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.681775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.681784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.681956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.681964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.682267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.306 [2024-12-06 17:02:43.682276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.306 qpair failed and we were unable to recover it. 00:35:55.306 [2024-12-06 17:02:43.682573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.307 [2024-12-06 17:02:43.682582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.307 qpair failed and we were unable to recover it. 00:35:55.307 [2024-12-06 17:02:43.682762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.307 [2024-12-06 17:02:43.682771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.307 qpair failed and we were unable to recover it. 00:35:55.307 [2024-12-06 17:02:43.682959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.307 [2024-12-06 17:02:43.682968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.307 qpair failed and we were unable to recover it. 00:35:55.307 [2024-12-06 17:02:43.683150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.307 [2024-12-06 17:02:43.683160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.307 qpair failed and we were unable to recover it. 00:35:55.307 [2024-12-06 17:02:43.683486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.307 [2024-12-06 17:02:43.683495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.307 qpair failed and we were unable to recover it. 
00:35:55.307 [2024-12-06 17:02:43.683563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:55.307 [2024-12-06 17:02:43.683571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 
00:35:55.307 qpair failed and we were unable to recover it. 
[... the same three-line error repeats for every reconnect attempt from 17:02:43.683930 through 17:02:43.746667; only the timestamps change ...]
00:35:55.314 [2024-12-06 17:02:43.747029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:35:55.314 [2024-12-06 17:02:43.747038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 
00:35:55.314 qpair failed and we were unable to recover it. 
00:35:55.314 [2024-12-06 17:02:43.747340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.747349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.747634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.747642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.747920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.747929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.748212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.748221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.748508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.748517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.748801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.748809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.749079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.749087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.749382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.749391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.749674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.749682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.749984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.749993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 
00:35:55.314 [2024-12-06 17:02:43.750267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.750277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.314 [2024-12-06 17:02:43.750600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.314 [2024-12-06 17:02:43.750609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.314 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.750770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.750780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.751083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.751092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.751406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.751415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.751719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.751728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.752017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.752026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.752324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.752333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.752493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.752502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.752817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.752826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 
00:35:55.315 [2024-12-06 17:02:43.753116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.753125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.753417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.753426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.753770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.753778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.754054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.754063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.754370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.754379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.754666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.754675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.754956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.754965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.755265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.755274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.755587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.755596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.755918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.755926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 
00:35:55.315 [2024-12-06 17:02:43.756184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.756193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.756532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.756540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.756860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.756868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.757077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.757085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.757388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.757396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.757682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.757690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.758028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.758036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.758335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.758344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.758653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.758663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.315 [2024-12-06 17:02:43.758959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.758968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 
00:35:55.315 [2024-12-06 17:02:43.759252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.315 [2024-12-06 17:02:43.759261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.315 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.759569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.759577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.759846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.759854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.760167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.760176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.760501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.760509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.760799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.760808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.761128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.761137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.761414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.761423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.761760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.761768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.762108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.762117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 
00:35:55.316 [2024-12-06 17:02:43.762390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.762398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.762688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.762696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.762981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.762990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.763169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.763178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.763482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.763490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.763776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.763782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.764079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.764085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.764405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.764412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.764714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.764722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.765018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.765027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 
00:35:55.316 [2024-12-06 17:02:43.765231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.765240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.765563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.765571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.765767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.765777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.766108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.766118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.766406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.766415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.766694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.766703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.766980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.766989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.767270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.767280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.767602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.767611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 00:35:55.316 [2024-12-06 17:02:43.767970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.316 [2024-12-06 17:02:43.767979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.316 qpair failed and we were unable to recover it. 
00:35:55.316 [2024-12-06 17:02:43.768297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.768307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.768531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.768539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.768856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.768865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.769167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.769177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.769494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.769503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.769820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.769829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.770117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.770126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.770449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.770458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.770628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.770638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.770985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.770995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 
00:35:55.317 [2024-12-06 17:02:43.771300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.771309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.771582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.771591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.771896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.771905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.772107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.772119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.772331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.772339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.772593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.772602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.772829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.772839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.773030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.773039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.773346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.773356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 00:35:55.317 [2024-12-06 17:02:43.773676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.317 [2024-12-06 17:02:43.773686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420 00:35:55.317 qpair failed and we were unable to recover it. 
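errno 111 is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 is actively refusing the TCP connection, so posix_sock_create() never gets a socket to hand to nvme_tcp. A minimal standalone reproduction of that failure mode is sketched below; the loopback address and port are placeholders for the example, not values taken from this run, and it assumes nothing is listening on that port.

```c
/*
 * Sketch of the failure mode logged above: a plain TCP connect() to an
 * address/port with no listener fails with ECONNREFUSED (errno 111 on
 * Linux), which is exactly what posix_sock_create() reports before
 * nvme_tcp gives up on the qpair.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                      /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumes no local listener */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port this prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```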
00:35:55.317 [2024-12-06 17:02:43.774393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.317 [2024-12-06 17:02:43.774405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e0000b90 with addr=10.0.0.2, port=4420
00:35:55.317 qpair failed and we were unable to recover it.
00:35:55.317 Read completed with error (sct=0, sc=8)
00:35:55.317 starting I/O failed
[... the outstanding Read/Write completions, 32 in this burst, all return (sct=0, sc=8), each followed by "starting I/O failed"; duplicate entries elided ...]
00:35:55.318 [2024-12-06 17:02:43.774700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:35:55.318 [2024-12-06 17:02:43.775070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.318 [2024-12-06 17:02:43.775087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.318 qpair failed and we were unable to recover it.
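To decode the status pairs in this stretch of the log: sct=0 selects the NVMe Generic Command Status set, in which sc=0x08 is "Command Aborted due to SQ Deletion", consistent with the in-flight I/Os being aborted when the failed queue pair was torn down, and the transport error -6 is -ENXIO, the same "No such device or address" text printed above. A small sketch that maps only the codes seen here, assuming Linux errno numbering:

```c
/*
 * Hedged decoder for the status values above. The sct/sc names follow the
 * NVMe base specification's Generic Command Status set; only the codes
 * that appear in this log are covered, not the full spec.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static const char *decode_status(int sct, int sc)
{
    if (sct == 0) { /* Generic Command Status set */
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        }
    }
    return "not decoded in this sketch";
}

int main(void)
{
    printf("(sct=0, sc=8) -> %s\n", decode_status(0, 8));
    /* SPDK reports transport errors as negative errno values: -6 == -ENXIO */
    printf("transport error -6 -> %s\n", strerror(ENXIO));
    return 0;
}
```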
[... the same connect() failed (errno = 111) / sock connection error / qpair failed sequence, now for tqpair=0x2441310 with addr=10.0.0.2, port=4420, repeats from 17:02:43.775436 through 17:02:43.794880; duplicate entries elided ...]
00:35:55.320 [2024-12-06 17:02:43.795185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.320 [2024-12-06 17:02:43.795197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.320 qpair failed and we were unable to recover it.
00:35:55.320 [2024-12-06 17:02:43.795477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.795488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.795782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.795794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.796087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.796098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.796395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.796407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.796800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.796812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.797118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.797130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.797299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.797312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.797591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.797602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.797881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.797893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.798174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.798185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 
00:35:55.320 [2024-12-06 17:02:43.798510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.320 [2024-12-06 17:02:43.798521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.320 qpair failed and we were unable to recover it. 00:35:55.320 [2024-12-06 17:02:43.798837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.798849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.799125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.799136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.799425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.799436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.799705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.799716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.800005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.800016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.800346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.800358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.800716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.800727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.801005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.801016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.801320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.801332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 
00:35:55.321 [2024-12-06 17:02:43.801631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.801643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.801920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.801932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.802233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.802245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.802481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.802492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.802796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.802807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.803111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.803123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.803442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.803453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.803781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.803792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.804110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.804122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.804430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.804442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 
00:35:55.321 [2024-12-06 17:02:43.804714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.804725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.805018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.805032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.805346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.805358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.805678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.805689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.805993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.806004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.806290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.806302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.806567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.806579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.806871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.806883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.807161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.807173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.321 qpair failed and we were unable to recover it. 00:35:55.321 [2024-12-06 17:02:43.807453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.321 [2024-12-06 17:02:43.807465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 
00:35:55.322 [2024-12-06 17:02:43.807745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.807756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.808075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.808087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.808400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.808412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.808701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.808713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.809020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.809031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.809341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.809353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.809680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.809692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.809973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.809984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.810165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.810178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.810552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.810563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 
00:35:55.322 [2024-12-06 17:02:43.810841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.810853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.811135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.811147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.811451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.811462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.811761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.811772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.812041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.812053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.812350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.812362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.812665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.812677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.813001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.813012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.813327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.813341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.813660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.813671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 
00:35:55.322 [2024-12-06 17:02:43.814009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.814021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.814316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.814328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.814513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.814525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.814849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.814861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.815160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.815172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.815483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.815495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.815791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.815802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.816076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.322 [2024-12-06 17:02:43.816088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.322 qpair failed and we were unable to recover it. 00:35:55.322 [2024-12-06 17:02:43.816396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.816408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.816704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.816716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 
00:35:55.323 [2024-12-06 17:02:43.817009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.817020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.817306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.817318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.817599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.817611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.817935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.817946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.818218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.818230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.818534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.818546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.818813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.818824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.819145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.819157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.819457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.819468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.819800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.819811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 
00:35:55.323 [2024-12-06 17:02:43.820081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.820092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.820410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.820422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.820720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.820731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.821021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.821033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.821340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.821352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.821538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.821552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.821827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.821839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.822120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.822132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.822409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.822420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.822706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.822717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 
00:35:55.323 [2024-12-06 17:02:43.823065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.823076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.823405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.823417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.323 [2024-12-06 17:02:43.823694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.323 [2024-12-06 17:02:43.823705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.323 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.823866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.823879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.824190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.824202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.824401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.824413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.824609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.824620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.824920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.824932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.825108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.825121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.825413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.825424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 
00:35:55.324 [2024-12-06 17:02:43.825750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.825761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.826044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.826056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.826348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.826360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.826640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.826652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.826922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.826933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.827219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.827231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.827530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.827541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.827826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.827837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.828142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.828154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.828455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.828467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 
00:35:55.324 [2024-12-06 17:02:43.828736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.828747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.829041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.829052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.829340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.829352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.829681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.829692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.829965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.829976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.830293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.830305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.830577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.830588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.830880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.830892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.831165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.831177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.831455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.831467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 
00:35:55.324 [2024-12-06 17:02:43.831796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.831807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.324 qpair failed and we were unable to recover it. 00:35:55.324 [2024-12-06 17:02:43.832082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.324 [2024-12-06 17:02:43.832094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.832274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.832286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.832580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.832592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.832862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.832873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.833198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.833210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.833525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.833537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.833873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.833884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.834159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.834171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 00:35:55.325 [2024-12-06 17:02:43.834467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.325 [2024-12-06 17:02:43.834478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.325 qpair failed and we were unable to recover it. 
00:35:55.325 [2024-12-06 17:02:43.834776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.325 [2024-12-06 17:02:43.834788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.325 qpair failed and we were unable to recover it.
[... the same three-message error sequence repeats ~210 times with consecutive timestamps from 17:02:43.834776 through 17:02:43.897817 (console time 00:35:55.325-00:35:55.332): each connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x2441310, and the qpair fails without recovery ...]
00:35:55.332 [2024-12-06 17:02:43.897805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.332 [2024-12-06 17:02:43.897817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.332 qpair failed and we were unable to recover it.
00:35:55.332 [2024-12-06 17:02:43.898090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.332 [2024-12-06 17:02:43.898104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.332 qpair failed and we were unable to recover it. 00:35:55.332 [2024-12-06 17:02:43.898394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.332 [2024-12-06 17:02:43.898406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.332 qpair failed and we were unable to recover it. 00:35:55.332 [2024-12-06 17:02:43.898738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.332 [2024-12-06 17:02:43.898749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.332 qpair failed and we were unable to recover it. 00:35:55.332 [2024-12-06 17:02:43.899045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.332 [2024-12-06 17:02:43.899056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.332 qpair failed and we were unable to recover it. 00:35:55.332 [2024-12-06 17:02:43.899387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.332 [2024-12-06 17:02:43.899399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.332 qpair failed and we were unable to recover it. 00:35:55.332 [2024-12-06 17:02:43.899687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.332 [2024-12-06 17:02:43.899698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.332 qpair failed and we were unable to recover it. 00:35:55.332 [2024-12-06 17:02:43.899889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.332 [2024-12-06 17:02:43.899901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.332 qpair failed and we were unable to recover it. 00:35:55.332 [2024-12-06 17:02:43.900212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.900224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.900565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.900577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.900746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.900758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 
00:35:55.333 [2024-12-06 17:02:43.901053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.901064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.901360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.901372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.901665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.901676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.901963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.901974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.902276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.902288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.902593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.902605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.902903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.902914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.903221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.903233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.903508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.903520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.903801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.903813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 
00:35:55.333 [2024-12-06 17:02:43.904091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.904112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.904448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.904459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.904731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.904742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.905022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.905034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.905344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.905356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.905639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.905650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.905925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.905936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.906203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.906216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.906499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.906510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.906813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.906825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 
00:35:55.333 [2024-12-06 17:02:43.907140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.907154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.907460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.907472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.907752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.907764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.908044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.908056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.908350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.908362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.908661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.908672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.333 qpair failed and we were unable to recover it. 00:35:55.333 [2024-12-06 17:02:43.908944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.333 [2024-12-06 17:02:43.908956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.909253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.909265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.909558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.909569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.909851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.909862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 
00:35:55.334 [2024-12-06 17:02:43.910147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.910159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.910439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.910451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.910726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.910738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.910918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.910931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.911257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.911269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.911602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.911613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.911799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.911812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.912107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.912120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.912381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.912392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.912663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.912674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 
00:35:55.334 [2024-12-06 17:02:43.912968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.912979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.913302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.913313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.913597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.913608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.913926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.913937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.914127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.914139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.914423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.914434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.914725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.914736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.915019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.915032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.915338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.915349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.915629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.915640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 
00:35:55.334 [2024-12-06 17:02:43.915945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.915956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.916249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.916260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.916530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.916541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.916834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.916844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.917125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.917136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.917426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.334 [2024-12-06 17:02:43.917437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.334 qpair failed and we were unable to recover it. 00:35:55.334 [2024-12-06 17:02:43.917719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.917730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.917885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.917896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.918206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.918218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.918499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.918510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 
00:35:55.335 [2024-12-06 17:02:43.918838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.918849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.919157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.919169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.919478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.919489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.919768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.919779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.920055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.920065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.920348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.920359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.920631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.920642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.920927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.920938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.921235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.921247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.921559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.921571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 
00:35:55.335 [2024-12-06 17:02:43.921852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.921863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.922132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.922143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.922414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.922425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.922705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.922716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.922999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.923009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.923327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.923338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.923638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.923649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.923948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.923959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.924265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.924276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 00:35:55.335 [2024-12-06 17:02:43.924554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.335 [2024-12-06 17:02:43.924564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.335 qpair failed and we were unable to recover it. 
00:35:55.335 [2024-12-06 17:02:43.924855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.924865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.925152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.925164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.925489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.925500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.925785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.925796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.926067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.926078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.926359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.926370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.926689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.926700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.926860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.926872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.927184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.927196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.927502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.927513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 
00:35:55.336 [2024-12-06 17:02:43.927791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.927802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.928131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.928143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.928421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.928432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.928728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.928740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.929032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.929043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.929342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.929354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.929687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.929698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.929986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.929997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.930248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.930259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.930568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.930579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 
00:35:55.336 [2024-12-06 17:02:43.930912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.930923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.931214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.931225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.931514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.931525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.931827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.931838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.932140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.932151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.932486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.932497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.932806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.932816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.933025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.933037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.933427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.933438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 00:35:55.336 [2024-12-06 17:02:43.933774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.336 [2024-12-06 17:02:43.933785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.336 qpair failed and we were unable to recover it. 
00:35:55.336 [2024-12-06 17:02:43.934147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.934159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.934479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.934490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.934774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.934785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.934958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.934969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.935281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.935293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.935603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.935615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.935916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.935928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.936119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.936131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.936423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.936434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 00:35:55.337 [2024-12-06 17:02:43.936744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.337 [2024-12-06 17:02:43.936755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.337 qpair failed and we were unable to recover it. 
00:35:55.337 [2024-12-06 17:02:43.937036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.337 [2024-12-06 17:02:43.937047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.337 qpair failed and we were unable to recover it.
00:35:55.337 [2024-12-06 17:02:43.937218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.337 [2024-12-06 17:02:43.937229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.337 qpair failed and we were unable to recover it.
00:35:55.337 [2024-12-06 17:02:43.937430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.337 [2024-12-06 17:02:43.937441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.337 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every subsequent reconnect attempt to tqpair=0x2441310 (addr=10.0.0.2, port=4420) between 17:02:43.937 and 17:02:43.998, each attempt ending "qpair failed and we were unable to recover it." ...]
00:35:55.619 [2024-12-06 17:02:43.998856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.619 [2024-12-06 17:02:43.998867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.619 qpair failed and we were unable to recover it.
00:35:55.619 [2024-12-06 17:02:43.999194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:43.999205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:43.999541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:43.999552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:43.999858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:43.999870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.000143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.000154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.000454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.000465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.000763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.000773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.001055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.001066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.001349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.001361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.001678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.001689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.001875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.001885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-12-06 17:02:44.002209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.002221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.002494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.002505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.002670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.002681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.002984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.002995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.003308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.003319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.003616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.003626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.003907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.003918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.004124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.004135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.004450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.004461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.004761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.004771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-12-06 17:02:44.005056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.005067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.005371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.005383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.005656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.005667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.005956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.005967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.006247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.006259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.006535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.006546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.006824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.006835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.007143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.007154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.007421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.007432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.007612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.007626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 
00:35:55.619 [2024-12-06 17:02:44.007936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.007947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.008115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.008127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.008413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.619 [2024-12-06 17:02:44.008424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.619 qpair failed and we were unable to recover it. 00:35:55.619 [2024-12-06 17:02:44.008761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.008771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.009064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.009076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.009360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.009371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.009697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.009707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.010003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.010014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.010322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.010333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.010644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.010655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-12-06 17:02:44.010969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.010980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.011280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.011291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.011584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.011595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.011863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.011875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.012041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.012053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.012334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.012346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.012640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.012651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.012943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.012954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.013141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.013152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.013566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.013577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-12-06 17:02:44.013897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.013908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.014181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.014193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.014479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.014490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.014796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.014807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.015122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.015134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.015413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.015424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.015695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.015711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.016015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.016026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.016337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.016348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.016716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.016727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-12-06 17:02:44.017021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.017032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.017336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.017348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.017666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.017677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.017957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.017967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.018249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.018260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.018589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.018600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.018921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.018932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.019231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.019243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.019512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.019523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.019841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.019851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 
00:35:55.620 [2024-12-06 17:02:44.020126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.620 [2024-12-06 17:02:44.020138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.620 qpair failed and we were unable to recover it. 00:35:55.620 [2024-12-06 17:02:44.020435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.020446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.020756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.020767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.021080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.021091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.021386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.021397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.021680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.021691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.021974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.021985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.022259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.022271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.022560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.022570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.022804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.022814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-12-06 17:02:44.023107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.023119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.023422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.023433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.023711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.023721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.023988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.023999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.024304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.024316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.024609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.024620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.024957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.024968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.025234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.025246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.025579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.025590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.025886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.025897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-12-06 17:02:44.026157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.026168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.026470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.026481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.026808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.026819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.027119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.027131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.027436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.027448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.027777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.027787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.028058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.028069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.028378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.028390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.028690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.028701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.028971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.028981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 
00:35:55.621 [2024-12-06 17:02:44.029267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.029278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.029445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.029458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.029745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.029756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.030076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.030087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.030420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.030432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.030710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.030720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.030906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.030919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.031251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.031262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.031555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.031566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.621 qpair failed and we were unable to recover it. 00:35:55.621 [2024-12-06 17:02:44.031884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.621 [2024-12-06 17:02:44.031895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 
00:35:55.622 [2024-12-06 17:02:44.032080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.032091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.032422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.032434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.032721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.032732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.033040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.033050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.033338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.033349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.033630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.033641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.033913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.033924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.034138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.034150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.034483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.034494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.034702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.034714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 
00:35:55.622 [2024-12-06 17:02:44.035054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.035064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.035253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.035265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.035576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.035587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.035864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.035876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.036219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.036232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.036499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.036511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.036802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.036813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.037097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.037112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.037422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.037433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 00:35:55.622 [2024-12-06 17:02:44.037706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.622 [2024-12-06 17:02:44.037717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.622 qpair failed and we were unable to recover it. 
00:35:55.622 [2024-12-06 17:02:44.038003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.622 [2024-12-06 17:02:44.038014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.622 qpair failed and we were unable to recover it.
00:35:55.622 [... the same three-message sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats approximately 200 more times between 17:02:44.038 and 17:02:44.099; duplicate log records elided ...]
00:35:55.628 [2024-12-06 17:02:44.100131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.100142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.100410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.100423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.100695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.100707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.101014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.101025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.101358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.101370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.101679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.101690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.102009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.102020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.102318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.102330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.102656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.102667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.102852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.102864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 
00:35:55.628 [2024-12-06 17:02:44.103169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.103180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.103484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.103495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.103778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.103789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.104073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.104085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.104382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.104393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.104682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.104693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.104998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.105009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.105304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.105315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.105605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.105616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.105954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.105965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 
00:35:55.628 [2024-12-06 17:02:44.106250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.106261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.106576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.106587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.106906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.106916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.107197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.107208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.107527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.107538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.107821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.107832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.108129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.108140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.108415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.108426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.108753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.628 [2024-12-06 17:02:44.108766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.628 qpair failed and we were unable to recover it. 00:35:55.628 [2024-12-06 17:02:44.109046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.109057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-12-06 17:02:44.109385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.109396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.109558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.109570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.109859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.109870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.110159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.110171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.110463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.110474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.110762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.110774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.111056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.111067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.111361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.111373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.111702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.111713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.111992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.112003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-12-06 17:02:44.112167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.112179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.112498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.112510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.112833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.112844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.113139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.113150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.113328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.113340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.113628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.113639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.113954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.113965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.114270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.114281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.114602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.114613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.114885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.114895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-12-06 17:02:44.115202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.115213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.115546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.115557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.115889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.115900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.116207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.116218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.116503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.116513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.116784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.116795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.116966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.116976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.117264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.117275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.117591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.117602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.117786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.117797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 
00:35:55.629 [2024-12-06 17:02:44.118090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.118110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.118435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.118446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.118722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.118733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.119023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.119033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.119327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.119338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.119613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.119625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.119900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.119911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.629 [2024-12-06 17:02:44.120191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.629 [2024-12-06 17:02:44.120203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.629 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.120457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.120468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.120675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.120686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-12-06 17:02:44.121025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.121036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.121335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.121347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.121670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.121681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.121953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.121964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.122267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.122279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.122595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.122606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.122898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.122910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.123210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.123221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.123555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.123566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.123747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.123759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-12-06 17:02:44.124059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.124071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.124248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.124260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.124549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.124560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.124725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.124737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.125039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.125051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.125267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.125279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.125578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.125589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.125916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.125927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.126249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.126260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.126560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.126571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-12-06 17:02:44.126845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.126856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.127124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.127136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.127514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.127525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.127836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.127847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.128169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.128180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.128460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.128471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.128757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.128770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.129047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.129057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.129358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.129370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.129666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.129677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 
00:35:55.630 [2024-12-06 17:02:44.129958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.129969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.130246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.130257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.130563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.130573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.130908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.130919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.131086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.131099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.131389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.630 [2024-12-06 17:02:44.131401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.630 qpair failed and we were unable to recover it. 00:35:55.630 [2024-12-06 17:02:44.131684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.131695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.131986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.131997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.132188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.132201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.132510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.132522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 
00:35:55.631 [2024-12-06 17:02:44.132802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.132813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.133111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.133122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.133398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.133410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.133701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.133712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.134033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.134044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.134336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.134663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.134674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.134974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.134985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.135265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.135276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.135615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.135625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 
00:35:55.631 [2024-12-06 17:02:44.135895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.135907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.136182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.136194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.136487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.136498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.136770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.136784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.137054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.137065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.137388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.137399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.137678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.137689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.137962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.137974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.138245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.138256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 00:35:55.631 [2024-12-06 17:02:44.138531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.631 [2024-12-06 17:02:44.138542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.631 qpair failed and we were unable to recover it. 
00:35:55.631 [2024-12-06 17:02:44.138882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.631 [2024-12-06 17:02:44.138893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.631 qpair failed and we were unable to recover it.
00:35:55.631-00:35:55.637 (the identical connect()/qpair-connect error triplet repeats back-to-back through [2024-12-06 17:02:44.202236], always errno = 111 for tqpair=0x2441310 with addr=10.0.0.2, port=4420, each attempt ending in "qpair failed and we were unable to recover it.")
00:35:55.637 [2024-12-06 17:02:44.202559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.202572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.202849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.202860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.203142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.203153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.203429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.203440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.203729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.203740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.204082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.204092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.204375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.204386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.204554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.204566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.204865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.204875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.205066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.205076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 
00:35:55.637 [2024-12-06 17:02:44.205411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.205425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.205750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.205761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.206110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.206122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.206406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.206417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.206750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.206760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.206961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.206972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.207264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.207276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.207587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.207598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.207959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.207971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.208264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.208276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 
00:35:55.637 [2024-12-06 17:02:44.208591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.208601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.208912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.208923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.209109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.209120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.209402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.209413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.209740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.209752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.210039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.637 [2024-12-06 17:02:44.210050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.637 qpair failed and we were unable to recover it. 00:35:55.637 [2024-12-06 17:02:44.210349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.210360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.210634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.210647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.210967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.210978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.211262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.211273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 
00:35:55.638 [2024-12-06 17:02:44.211593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.211604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.211881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.211892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.212168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.212179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.212375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.212385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.212712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.212723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.212996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.213007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.213295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.213306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.213605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.213617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.213931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.213942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.214143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.214154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 
00:35:55.638 [2024-12-06 17:02:44.214480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.214492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.214797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.214808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.215091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.215108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.215408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.215419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.215758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.215770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.216128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.216140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.216461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.216472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.216780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.216791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.217077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.217088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.217395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.217406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 
00:35:55.638 [2024-12-06 17:02:44.217623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.217634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.217941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.217951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.218255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.218266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.218567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.218578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.218895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.218907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.219208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.219220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.219503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.219515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.219790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.219801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.219961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.219972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.220271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.220282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 
00:35:55.638 [2024-12-06 17:02:44.220614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.220626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.220963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.220974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.221255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.221267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.221588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.221599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.638 qpair failed and we were unable to recover it. 00:35:55.638 [2024-12-06 17:02:44.221905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.638 [2024-12-06 17:02:44.221918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.222235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.222246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.222558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.222569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.222849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.222859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.223132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.223146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.223497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.223509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 
00:35:55.639 [2024-12-06 17:02:44.223799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.223810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.224120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.224132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.224432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.224443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.224712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.224723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.225000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.225011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.225306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.225317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.225593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.225604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.225790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.225801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.226106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.226118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.226444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.226455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 
00:35:55.639 [2024-12-06 17:02:44.226739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.226750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.227045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.227055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.227351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.227363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.227666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.227676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.227966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.227977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.228283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.228294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.228571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.228582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.228866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.228877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.229074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.229085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.229355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.229366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 
00:35:55.639 [2024-12-06 17:02:44.229648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.229659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.229935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.229946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.230243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.230254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.230556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.230567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.230860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.230871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.231151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.231164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.231467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.231478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.231757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.231768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.232054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.232065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.232357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.232368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 
00:35:55.639 [2024-12-06 17:02:44.232664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.232675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.232974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.232985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.233122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.639 [2024-12-06 17:02:44.233134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.639 qpair failed and we were unable to recover it. 00:35:55.639 [2024-12-06 17:02:44.233424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.233435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.233745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.233756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.234037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.234048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.234389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.234400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.234703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.234714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.235000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.235011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.235341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.235352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 
00:35:55.640 [2024-12-06 17:02:44.235634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.235645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.235934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.235945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.236253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.236265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.236543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.236554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.236876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.236887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.237195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.237206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.237509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.237520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.237796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.237807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.238091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.238111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.238425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.238436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 
00:35:55.640 [2024-12-06 17:02:44.238710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.238721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.239016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.239026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.239217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.239231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.239565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.239575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.239898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.239909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.240213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.240225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.240522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.240533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.240692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.240703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.241017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.241027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.241285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.241297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 
00:35:55.640 [2024-12-06 17:02:44.241589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.241600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.241903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.241914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.242224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.242235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.242529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.242540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.242832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.242843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.243132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.243143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.243430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.243442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.640 qpair failed and we were unable to recover it. 00:35:55.640 [2024-12-06 17:02:44.243686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.640 [2024-12-06 17:02:44.243697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.641 qpair failed and we were unable to recover it. 00:35:55.641 [2024-12-06 17:02:44.244021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.641 [2024-12-06 17:02:44.244032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.641 qpair failed and we were unable to recover it. 00:35:55.641 [2024-12-06 17:02:44.244345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.641 [2024-12-06 17:02:44.244356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.641 qpair failed and we were unable to recover it. 
00:35:55.920 [2024-12-06 17:02:44.301876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.301887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.302198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.302209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.302552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.302563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.302909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.302920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.303243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.303255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.303522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.303533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.303832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.303843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.304123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.304135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.304421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.304432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.304741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.304755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 
00:35:55.920 [2024-12-06 17:02:44.305109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.305120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.305433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.305444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.305768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.305780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.306065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.306076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.306367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.306379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.306663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.306673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.306959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.306970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.307249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.307260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.307594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.307605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.307893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.307904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 
00:35:55.920 [2024-12-06 17:02:44.308186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.308197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.308484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.308495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.308774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.308785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.308989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.309000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.309288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.309300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.309584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.309595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.920 [2024-12-06 17:02:44.309881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.920 [2024-12-06 17:02:44.309892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.920 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.310214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.310225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.310514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.310525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.310828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.310838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 
00:35:55.921 [2024-12-06 17:02:44.311087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.311097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.311377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.311389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.311703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.311714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.311993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.312004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.312306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.312317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.312611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.312623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.312952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.312963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.313165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.313177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.313471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.313482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.313674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.313685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 
00:35:55.921 [2024-12-06 17:02:44.314006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.314017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.314194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.314205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.314406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.314416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.314634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.314645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.314843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.314854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.315126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.315138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.315420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.315431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.315696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.315708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.315905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.315916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.316138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.316150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 
00:35:55.921 [2024-12-06 17:02:44.316463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.316474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.316735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.316747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.317050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.317061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.317368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.317379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.317664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.317675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.317974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.317985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.318319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.318330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.318633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.318644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.319005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.319016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.319314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.319325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 
00:35:55.921 [2024-12-06 17:02:44.319650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.319661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.319935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.319946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.921 qpair failed and we were unable to recover it. 00:35:55.921 [2024-12-06 17:02:44.320284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.921 [2024-12-06 17:02:44.320295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.320593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.320604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.320898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.320909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.321217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.321228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.321520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.321531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.321813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.321825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.322123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.322134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.322449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.322461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 
00:35:55.922 [2024-12-06 17:02:44.322784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.322795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.323079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.323090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.323393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.323404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.323682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.323693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.323973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.323984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.324265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.324276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.324571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.324581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.324857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.324869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.325206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.325218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.325490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.325501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 
00:35:55.922 [2024-12-06 17:02:44.325807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.325818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.325990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.326001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.326237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.326249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.326572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.326583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.326897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.326908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.327107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.327118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.327398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.327409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.327710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.327721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.328024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.328035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.328337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.328348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 
00:35:55.922 [2024-12-06 17:02:44.328676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.328687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.328965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.328977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.329267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.329278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.329569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.329580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.329881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.329892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.330097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.330116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.330405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.330415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.922 [2024-12-06 17:02:44.330711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.922 [2024-12-06 17:02:44.330722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.922 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.331045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.331056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.331333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.331344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 
00:35:55.923 [2024-12-06 17:02:44.331621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.331632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.331955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.331966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.332254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.332266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.332569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.332580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.332899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.332913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.333212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.333223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.333511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.333522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.333799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.333810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.334094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.334110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.334293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.334304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 
00:35:55.923 [2024-12-06 17:02:44.334596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.334607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.334909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.334920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.335204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.335216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.335505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.335516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.335840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.335851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.336151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.336162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.336486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.336497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.336775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.336786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.337094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.337107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.337477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.337488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 
00:35:55.923 [2024-12-06 17:02:44.337807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.337817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.338014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.338025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.338304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.338315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.338617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.338627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.338947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.338957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.339231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.339243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.339560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.339572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.339849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.339860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.340150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.340161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.340427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.340438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 
00:35:55.923 [2024-12-06 17:02:44.340759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.340769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.341032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.341045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.923 [2024-12-06 17:02:44.341354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.923 [2024-12-06 17:02:44.341365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.923 qpair failed and we were unable to recover it. 00:35:55.924 [2024-12-06 17:02:44.341659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.924 [2024-12-06 17:02:44.341670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.924 qpair failed and we were unable to recover it. 00:35:55.924 [2024-12-06 17:02:44.341995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.924 [2024-12-06 17:02:44.342006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.924 qpair failed and we were unable to recover it. 00:35:55.924 [2024-12-06 17:02:44.342306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.924 [2024-12-06 17:02:44.342317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.924 qpair failed and we were unable to recover it. 00:35:55.924 [2024-12-06 17:02:44.342608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.924 [2024-12-06 17:02:44.342619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.924 qpair failed and we were unable to recover it. 00:35:55.924 [2024-12-06 17:02:44.342915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.924 [2024-12-06 17:02:44.342925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.924 qpair failed and we were unable to recover it. 00:35:55.924 [2024-12-06 17:02:44.343094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.924 [2024-12-06 17:02:44.343116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.924 qpair failed and we were unable to recover it. 00:35:55.924 [2024-12-06 17:02:44.343433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.924 [2024-12-06 17:02:44.343444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.924 qpair failed and we were unable to recover it. 
00:35:55.924 [2024-12-06 17:02:44.343722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.924 [2024-12-06 17:02:44.343733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.924 qpair failed and we were unable to recover it.
[... the same three-line error pattern (posix_sock_create errno 111, nvme_tcp_qpair_connect_sock on tqpair=0x2441310 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats roughly 200 more times, with only the timestamps advancing, through 17:02:44.407 ...]
00:35:55.930 [2024-12-06 17:02:44.407707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.930 [2024-12-06 17:02:44.407718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.930 qpair failed and we were unable to recover it.
00:35:55.930 [2024-12-06 17:02:44.408006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.408017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.408308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.408320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.408608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.408620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.408903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.408914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.409195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.409207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.409485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.409497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.409780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.409791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.410091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.410104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.410422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.410432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.410717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.410728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 
00:35:55.930 [2024-12-06 17:02:44.411016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.411027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.411303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.411314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.411605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.411616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.411895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.411906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.412211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.412222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.412498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.412509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.412777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.412787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.413063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.413074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.413407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.930 [2024-12-06 17:02:44.413417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.930 qpair failed and we were unable to recover it. 00:35:55.930 [2024-12-06 17:02:44.413621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.413632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 
00:35:55.931 [2024-12-06 17:02:44.413951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.413962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.414275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.414287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.414596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.414607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.414912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.414923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.415215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.415227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.415501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.415512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.415796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.415807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.416112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.416123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.416403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.416414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.416727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.416738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 
00:35:55.931 [2024-12-06 17:02:44.417023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.417034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.417318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.417330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.417633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.417644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.417926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.417938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.418229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.418240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.418534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.418545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.418860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.418871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.419165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.419177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.419493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.419507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 00:35:55.931 [2024-12-06 17:02:44.419804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.931 [2024-12-06 17:02:44.419817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.931 qpair failed and we were unable to recover it. 
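errno 111 is ECONNREFUSED on Linux: the initiator keeps dialing 10.0.0.2:4420 while nothing is listening there, so every connect() is refused and each qpair attempt fails. A minimal C sketch of that dial-and-retry pattern, using plain blocking BSD sockets rather than SPDK's actual posix_sock_create()/nvme_tcp_qpair_connect_sock() internals, and assuming only the address and port seen in the log:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        /* With no listener on the port, connect() sets errno to
         * ECONNREFUSED, which is 111 on Linux - the value in the log. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        usleep(100 * 1000); /* brief back-off before the next attempt */
    }
    return 1;
}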
00:35:55.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2515495 Killed "${NVMF_APP[@]}" "$@"
00:35:55.931 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:35:55.931 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:35:55.931 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:55.931 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:55.931 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2516533
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2516533
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2516533 ']'
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:55.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:55.932 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:55.932 [... interleaved with the shell trace above, the connect()/qpair-failure triplet kept repeating, 17:02:44.420117 through 17:02:44.430177 ...]
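The shell trace shows the recovery path: the old target (pid 2515495) was killed at target_disconnect.sh line 36, then disconnect_init calls nvmfappstart -m 0xF0, which launches a fresh nvmf_tgt (pid 2516533) inside the cvl_0_0_ns_spdk namespace and has waitforlisten block until the new process answers on /var/tmp/spdk.sock. A minimal C sketch of that wait-until-listening idea, probing the UNIX-domain socket directly instead of going through the script's actual RPC-based check, and borrowing max_retries=100 from the trace:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Poll until something accepts connections on the given UNIX socket path. */
static int wait_for_listen(const char *path, int max_retries) {
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;          /* the target is up and listening */
        }
        /* ENOENT or ECONNREFUSED here just means "not ready yet". */
        close(fd);
        usleep(100 * 1000);
    }
    return -1;                 /* gave up: process never started listening */
}

int main(void) {
    return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
}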
00:35:55.932 [... the connect()/qpair-failure triplet continues to repeat with only the timestamps changing, 17:02:44.430459 through 17:02:44.457082 ...]
00:35:55.935 [2024-12-06 17:02:44.457410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.457421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.457605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.457617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.457792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.457803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.457980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.457992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.458173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.458186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.458538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.458549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.458874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.458886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.459218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.459230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.459547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.459558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.459842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.459853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 
00:35:55.935 [2024-12-06 17:02:44.460134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.460145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.460487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.460499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.460796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.460808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.461004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.461016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.461337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.461349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.461539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.461550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.461728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.461738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.462017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.462027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.462349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.462360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.462547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.935 [2024-12-06 17:02:44.462558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.935 qpair failed and we were unable to recover it. 00:35:55.935 [2024-12-06 17:02:44.462849] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:35:55.935 [2024-12-06 17:02:44.462886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.935 [2024-12-06 17:02:44.462895] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:55.935 [2024-12-06 17:02:44.462897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.935 qpair failed and we were unable to recover it.
[... the same three-line retry sequence repeats from 17:02:44.463274 through 17:02:44.465219 ...]
00:35:55.936 [2024-12-06 17:02:44.465542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.936 [2024-12-06 17:02:44.465554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.936 qpair failed and we were unable to recover it.
[... the same three-line retry sequence for tqpair=0x2441310 (addr=10.0.0.2, port=4420) repeats from 17:02:44.465729 through 17:02:44.506298; every attempt fails with errno = 111 and the qpair cannot be recovered ...]
00:35:55.940 [2024-12-06 17:02:44.506596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.506607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.506938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.506949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.507260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.507272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.507557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.507567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.507731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.507743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.508053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.508064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.508400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.508411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.508694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.508706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.508985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.508997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.509203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.509214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 
00:35:55.940 [2024-12-06 17:02:44.509530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.509544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.509891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.509903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.510215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.510226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.510547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.510558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.510868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.510878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.511176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.511187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.511498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.511509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.511825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.511836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.512135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.512146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.512353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.512365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 
00:35:55.940 [2024-12-06 17:02:44.512677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.512687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.512978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.512989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.513275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.513286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.940 [2024-12-06 17:02:44.513588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.940 [2024-12-06 17:02:44.513598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.940 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.513891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.513902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.514218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.514230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.514543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.514555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.514883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.514894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.515072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.515083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.515418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.515430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 
00:35:55.941 [2024-12-06 17:02:44.515614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.515625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.515832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.515843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.516149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.516160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.516500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.516511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.516855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.516866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.517200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.517211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.517513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.517524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.517816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.517827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.518118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.518129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.518467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.518478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 
00:35:55.941 [2024-12-06 17:02:44.518785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.518796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.519088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.519098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.519480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.519491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.519790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.519801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.520117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.520128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.520303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.520315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.520651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.520662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.520953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.520964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.521282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.521294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.521614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.521625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 
00:35:55.941 [2024-12-06 17:02:44.521939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.521950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.522259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.522272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.522466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.522477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.522796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.522807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.523086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.523097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.523418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.523429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.523737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.523748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.523922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.523934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.524197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.941 [2024-12-06 17:02:44.524209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.941 qpair failed and we were unable to recover it. 00:35:55.941 [2024-12-06 17:02:44.524524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.524535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 
00:35:55.942 [2024-12-06 17:02:44.524884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.524895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.525121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.525132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.525444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.525455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.525767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.525778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.525956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.525968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.526281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.526293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.526479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.526490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.526778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.526789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.527082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.527093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.527258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.527270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 
00:35:55.942 [2024-12-06 17:02:44.527580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.527591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.527889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.527900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.528186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.528197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.528577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.528588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.528905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.528916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.529228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.529239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.529585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.529597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.529886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.529897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.530272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.530286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.530573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.530584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 
00:35:55.942 [2024-12-06 17:02:44.530813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.530824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.531165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.531175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.531490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.531500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.531790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.531801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.532098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.532113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.532270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.532283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.532594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.532605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.532783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.532794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.533119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.533130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 00:35:55.942 [2024-12-06 17:02:44.533461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.942 [2024-12-06 17:02:44.533472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.942 qpair failed and we were unable to recover it. 
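For readers triaging this pattern: errno = 111 is ECONNREFUSED on Linux, meaning the TCP connection attempt to 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) was actively refused because nothing was accepting on the target yet, so the driver tears the qpair down and retries. A minimal standalone sketch, not SPDK's posix.c, that reproduces the same errno against a port with no listener (127.0.0.1:4420 here, assuming no local NVMe/TCP target is running):

/* econnrefused_demo.c - hypothetical demo, not SPDK source.
 * Reproduces the errno behind the repeated log lines above:
 * connect() to a port with no listener fails with errno = 111
 * (ECONNREFUSED) on Linux. Build: cc econnrefused_demo.c */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);   /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound, this prints:
         * connect: errno=111 (Connection refused) */
        printf("connect: errno=%d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Against the log's real target the same refusal simply recurs on every retry until a listener appears on 10.0.0.2:4420, which is exactly the cadence the timestamps above show.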
00:35:55.943 [2024-12-06 17:02:44.535326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... connect() retries against 10.0.0.2:4420 continue, 17:02:44.535 through 17:02:44.551 ...]
00:35:55.944 [2024-12-06 17:02:44.551122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.944 [2024-12-06 17:02:44.551134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.944 qpair failed and we were unable to recover it.
00:35:55.944 [2024-12-06 17:02:44.551303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.944 [2024-12-06 17:02:44.551314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.944 qpair failed and we were unable to recover it.
00:35:55.944 [2024-12-06 17:02:44.551460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:55.944 [2024-12-06 17:02:44.551485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:55.944 [2024-12-06 17:02:44.551491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:55.944 [2024-12-06 17:02:44.551497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:55.944 [2024-12-06 17:02:44.551502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:55.944 [2024-12-06 17:02:44.551664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.944 [2024-12-06 17:02:44.551674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.944 qpair failed and we were unable to recover it.
00:35:55.944 [2024-12-06 17:02:44.551975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.944 [2024-12-06 17:02:44.551985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.944 qpair failed and we were unable to recover it.
00:35:55.944 [2024-12-06 17:02:44.552266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.944 [2024-12-06 17:02:44.552277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.552585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.552596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.552885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.552896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.552888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:55.945 [2024-12-06 17:02:44.553052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:55.945 [2024-12-06 17:02:44.553202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.553213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.553207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:35:55.945 [2024-12-06 17:02:44.553383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:55.945 [2024-12-06 17:02:44.553450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.553460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.553675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.553687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.554010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.554022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.554346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.554358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.554690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.554701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.555054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.555065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.555288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.555300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
00:35:55.945 [2024-12-06 17:02:44.555509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.945 [2024-12-06 17:02:44.555520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.945 qpair failed and we were unable to recover it.
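The reactor_run notices interleaved above are the SPDK target finishing startup: the app framework launches one reactor (a per-core event loop) for each core in its reactor mask, here cores 4 through 7, while the initiator's reconnect loop keeps logging in parallel. A sketch of the standard app-framework bring-up that produces these notices; the function and field names are taken from SPDK's public spdk/event.h, and the mask value 0xF0 is inferred from the cores shown in the log, not stated by it:

/* Sketch of the bring-up behind "Reactor started on core N".
 * Assumes SPDK's public event framework API (spdk/event.h). */
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
    (void)ctx;
    /* A real target would create its subsystems here; the sketch just exits. */
    spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts;
    int rc;

    (void)argc;
    (void)argv;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "nvmf";
    opts.reactor_mask = "0xF0";   /* one reactor each on cores 4, 5, 6, 7 */

    /* spdk_app_start() spawns the reactors (printing the notices above),
     * runs start_fn on the main core, and blocks until spdk_app_stop(). */
    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}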
00:35:55.945 [2024-12-06 17:02:44.555792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.555803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.556111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.556123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.556302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.556314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.556638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.556649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.556933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.556945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.557289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.557301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.557650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.557661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.558033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.558045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.558380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.558391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.558757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.558768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 
00:35:55.945 [2024-12-06 17:02:44.558987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.558998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.559386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.559397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.559589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.559601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.559966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.559978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.560179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.560190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.560371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.560382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.560577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.560588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.560786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.560798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.561063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.561075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.561417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.561429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 
00:35:55.945 [2024-12-06 17:02:44.561724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.561735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.562032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.562043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.945 qpair failed and we were unable to recover it. 00:35:55.945 [2024-12-06 17:02:44.562227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.945 [2024-12-06 17:02:44.562239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.562566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.562578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.562904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.562915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.563227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.563239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.563588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.563599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.563977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.563989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.564329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.564341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.564522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.564534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 
00:35:55.946 [2024-12-06 17:02:44.564872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.564885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.565189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.565201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.565626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.565637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.565820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.565831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.566170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.566182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.566383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.566394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.566593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.566604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.566963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.566974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.567183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.567195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.567553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.567564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 
00:35:55.946 [2024-12-06 17:02:44.567923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.567934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.568112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.568123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.568437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.568449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.568787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.568798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.569168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.569182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.569557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.569568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.569718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.569730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.570063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.570074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.570254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.570267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.570587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.570599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 
00:35:55.946 [2024-12-06 17:02:44.570804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.570815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.570962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.570973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.571271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.571283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.571655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.571666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.572044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.572055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.572376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.572388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.572678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.946 [2024-12-06 17:02:44.572689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.946 qpair failed and we were unable to recover it. 00:35:55.946 [2024-12-06 17:02:44.573013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.573025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.573401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.573412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.573597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.573608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 
00:35:55.947 [2024-12-06 17:02:44.573964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.573976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.574220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.574231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.574432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.574444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.574778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.574789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.575166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.575179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.575420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.575431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.575790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.575802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.576117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.576128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.576483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.576495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.576800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.576811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 
00:35:55.947 [2024-12-06 17:02:44.577021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.577032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.577230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.577245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.577606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.577617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.577804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.577815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.578009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.578021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.578337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.578349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.578688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.578700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.578901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.578913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.579278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.579289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.579579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.579590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 
00:35:55.947 [2024-12-06 17:02:44.579961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.579972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.580269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.580280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.580512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.580524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.580848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.580859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.581157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.581167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.581611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.581622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.581950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.581961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.582257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.582269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.582610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.582621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.582962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.582973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 
00:35:55.947 [2024-12-06 17:02:44.583153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.583165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.583498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.583509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.583677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.583689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.947 qpair failed and we were unable to recover it. 00:35:55.947 [2024-12-06 17:02:44.583956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.947 [2024-12-06 17:02:44.583967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.584152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.584164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.584529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.584542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.584763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.584774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.585112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.585124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.585172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.585182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.585377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.585388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 
00:35:55.948 [2024-12-06 17:02:44.585757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.585769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.586112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.586124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.586489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.586501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.586775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.586786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.586937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.586956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.587361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.587372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.587733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.587745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.588132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.588143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.588389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.588400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 00:35:55.948 [2024-12-06 17:02:44.588709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:55.948 [2024-12-06 17:02:44.588720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420 00:35:55.948 qpair failed and we were unable to recover it. 
00:35:55.948 [2024-12-06 17:02:44.588908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.948 [2024-12-06 17:02:44.588919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.948 qpair failed and we were unable to recover it.
00:35:55.948 [2024-12-06 17:02:44.589281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.948 [2024-12-06 17:02:44.589292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2441310 with addr=10.0.0.2, port=4420
00:35:55.948 qpair failed and we were unable to recover it.
00:35:55.948 [2024-12-06 17:02:44.589389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2446e30 is same with the state(6) to be set
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Read completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 Write completed with error (sct=0, sc=8)
00:35:55.948 starting I/O failed
00:35:55.948 [2024-12-06 17:02:44.590268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:55.948 [2024-12-06 17:02:44.590748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.948 [2024-12-06 17:02:44.590772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.948 qpair failed and we were unable to recover it.
00:35:55.948 [2024-12-06 17:02:44.591001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.948 [2024-12-06 17:02:44.591012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.948 qpair failed and we were unable to recover it.
00:35:55.948 [2024-12-06 17:02:44.591331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.948 [2024-12-06 17:02:44.591370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.948 qpair failed and we were unable to recover it.
00:35:55.948 [2024-12-06 17:02:44.591737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.948 [2024-12-06 17:02:44.591752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.948 qpair failed and we were unable to recover it.
00:35:55.948 [2024-12-06 17:02:44.591916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.949 [2024-12-06 17:02:44.591930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.949 qpair failed and we were unable to recover it.
00:35:55.949 [2024-12-06 17:02:44.592234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.949 [2024-12-06 17:02:44.592246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.949 qpair failed and we were unable to recover it.
00:35:55.949 [2024-12-06 17:02:44.592599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.949 [2024-12-06 17:02:44.592611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.949 qpair failed and we were unable to recover it.
00:35:55.949 [2024-12-06 17:02:44.592940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.949 [2024-12-06 17:02:44.592952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.949 qpair failed and we were unable to recover it.
00:35:55.949 [2024-12-06 17:02:44.593218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.949 [2024-12-06 17:02:44.593230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.949 qpair failed and we were unable to recover it.
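The aborted completions above carry NVMe status sct=0, sc=8: status code type 0 is the generic command status set, where code 0x08 is defined by the NVMe base specification as Command Aborted due to SQ Deletion. In other words, the 32 outstanding reads and writes were aborted because their submission queue was torn down, consistent with the CQ transport error -6 (-ENXIO, No such device or address) reported immediately afterwards; note the driver then retries with a freshly allocated qpair (tqpair=0x7f89e4000b90 instead of 0x2441310). A small standalone decoder sketch; the lookup table is deliberately trimmed to a few generic codes and is not SPDK's own string table:

/* Decode the (sct, sc) pairs from the completion errors above. Strings
 * follow the NVMe base specification's generic command status table (SCT 0). */
#include <stdio.h>

static const char *
generic_sc_str(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x06: return "Internal Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "(other generic status)";
    }
}

static void
decode_status(unsigned int sct, unsigned int sc)
{
    printf("sct=%u, sc=%u: %s\n", sct, sc,
           sct == 0 ? generic_sc_str(sc) : "(non-generic status code type)");
}

int
main(void)
{
    decode_status(0, 8);   /* prints: sct=0, sc=8: Command Aborted due to SQ Deletion */
    return 0;
}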
00:35:55.949 [2024-12-06 17:02:44.593416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:55.949 [2024-12-06 17:02:44.593428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:55.949 qpair failed and we were unable to recover it.
00:35:56.218 [the same connect() failed (errno = 111) / sock connection error / qpair failed sequence repeated for every reconnect attempt from 17:02:44.593633 through 17:02:44.621083]
00:35:56.218 [same connect()/qpair failure repeated at 17:02:44.621292, 17:02:44.621661 and 17:02:44.621844]
00:35:56.218 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:56.219 [same connect()/qpair failure repeated at 17:02:44.622188]
00:35:56.219 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:35:56.219 [same connect()/qpair failure repeated at 17:02:44.622557 and 17:02:44.622742]
00:35:56.219 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:56.219 [same connect()/qpair failure repeated at 17:02:44.622936]
00:35:56.219 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:56.219 [same connect()/qpair failure repeated at 17:02:44.623304]
00:35:56.219 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.219 [same connect()/qpair failure repeated for attempts 17:02:44.623621 through 17:02:44.626109]
00:35:56.219 [2024-12-06 17:02:44.626454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.626462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.626805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.626814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.627021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.627030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.627253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.627262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.627460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.627469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.627735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.627743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.627978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.627986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.628292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.628300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.628676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.628686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.628984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.628992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-12-06 17:02:44.629179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.629187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.629557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.629568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.629751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.629759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.630061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.630069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.630257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.630266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.630562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.630570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.630898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.630906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.631226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.631234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.631571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.631579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.631899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.631907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 
00:35:56.219 [2024-12-06 17:02:44.632259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.632267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.632579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.632587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.219 [2024-12-06 17:02:44.632807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.219 [2024-12-06 17:02:44.632817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.219 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.633001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.633011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.633187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.633195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.633531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.633540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.633728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.633737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.634046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.634055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.634342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.634351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.634658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.634667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-12-06 17:02:44.634962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.634971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.635350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.635359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.635640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.635648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.635960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.635969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.636280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.636289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.636602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.636611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.636776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.636785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.636952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.636961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.637138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.637147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.637437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.637446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-12-06 17:02:44.637755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.637764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.637937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.637946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.638259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.638268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.638640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.638649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.638932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.638940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.639110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.639120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.639466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.639475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.639509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.639516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.639881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.639890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 00:35:56.220 [2024-12-06 17:02:44.640071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:56.220 [2024-12-06 17:02:44.640080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420 00:35:56.220 qpair failed and we were unable to recover it. 
00:35:56.220 [2024-12-06 17:02:44.640395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.220 [2024-12-06 17:02:44.640404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.220 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every retry from 17:02:44.640682 through 17:02:44.647910 ...]
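On Linux, errno 111 is ECONNREFUSED: the initiator's connect() reaches 10.0.0.2, but nothing is accepting on port 4420 yet, so nvme_tcp_qpair_connect_sock fails on every retry until the target brings up its listener. A minimal sketch of the same probe from a shell (the retry loop is illustrative, not part of the test scripts; it assumes bash with /dev/tcp support on the test host):

  # Probe 10.0.0.2:4420 the way the failing connect() does; bash's
  # /dev/tcp pseudo-device issues a plain TCP connect() under the hood.
  until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
      echo "connect() refused (errno 111, ECONNREFUSED); retrying..."
      sleep 0.1
  done
  echo "port 4420 is now accepting connections"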
00:35:56.221 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:56.221 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:56.221 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.221 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retry triplets interleaved with the xtrace output, 17:02:44.648218 through 17:02:44.649909 ...]
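For context, target_disconnect.sh@19's rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the target for a RAM-backed bdev: 64 MB of malloc'd memory exposed with a 512-byte block size under the name Malloc0. Outside the harness the same call is normally issued through SPDK's rpc.py; a minimal sketch (the /var/tmp/spdk.sock path is the assumed default socket, not shown in this log):

  # Create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0.
  # rpc_cmd in the test scripts is a thin wrapper around this invocation.
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  # Confirm the bdev exists and inspect its parameters.
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0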
00:35:56.221 [2024-12-06 17:02:44.650190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.221 [2024-12-06 17:02:44.650198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.221 qpair failed and we were unable to recover it.
[... the same triplet repeats for every retry from 17:02:44.650511 through 17:02:44.673658 ...]
00:35:56.224 Malloc0
00:35:56.224 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.224 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:35:56.224 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.224 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retry triplets interleaved with the xtrace output, 17:02:44.673858 through 17:02:44.675945 ...]
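The bare "Malloc0" line is the RPC's return value (the name of the bdev just created). target_disconnect.sh@21 then initializes the NVMe-oF TCP transport with rpc_cmd nvmf_create_transport -t tcp -o; -t/--trtype is the one required argument, while the trailing -o is a boolean tuning flag the harness passes and whose meaning this log does not show, so it is reproduced verbatim here without interpretation. A standalone sketch of the same step:

  # Initialize the TCP transport on a running nvmf target.
  # "-o" is kept as the harness passes it; treat it as an opaque flag.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  # Verify: the target should now report a tcp transport.
  ./scripts/rpc.py nvmf_get_transports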
00:35:56.224 [2024-12-06 17:02:44.676324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.224 [2024-12-06 17:02:44.676332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.224 qpair failed and we were unable to recover it.
[... the same triplet repeats for every retry from 17:02:44.676514 through 17:02:44.681378 ...]
00:35:56.224 [2024-12-06 17:02:44.681548] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... connect() failed (errno = 111) retry triplets, 17:02:44.681692 through 17:02:44.684105 ...]
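The *NOTICE* line is the target-side confirmation that nvmf_tcp_create ran. Note that the initiator's connect() to 10.0.0.2:4420 can still be refused at this point: creating a transport opens no listening socket by itself, which only happens once a subsystem listener is added. An illustrative way to inspect that gap from the shell (these commands are not part of the logged run):

  # A transport exists, but nothing listens on 4420 until a subsystem
  # listener is added. Inspect both sides of that gap:
  ./scripts/rpc.py nvmf_get_transports      # should show the tcp transport
  ss -ltn 'sport = :4420'                   # no LISTEN entry yet -> ECONNREFUSED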
[... connect() failed (errno = 111) retry triplets, 17:02:44.684290 through 17:02:44.686054 ...]
00:35:56.225 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[... one more connect() failed (errno = 111) retry triplet at 17:02:44.686337 ...]
00:35:56.225 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:56.225 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.225 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retry triplets interleaved with the xtrace output, 17:02:44.686560 through 17:02:44.688702 ...]
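target_disconnect.sh@22 creates the subsystem the initiator is trying to reach: nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a (--allow-any-host) waives the host whitelist and -s sets the controller serial number. A standalone sketch of this step plus the usual follow-ups that make the port finally accept connections (the namespace and listener lines are the conventional sequence, assumed rather than shown in this excerpt):

  # Create the subsystem, allowing any host NQN to connect (-a) and
  # setting the controller serial number (-s).
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # Assumed follow-ups, not yet visible in this part of the log:
  # attach the Malloc0 bdev as a namespace, then open the TCP listener
  # that makes 10.0.0.2:4420 accept the initiator's connect().
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420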
00:35:56.225 [2024-12-06 17:02:44.688982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.225 [2024-12-06 17:02:44.688990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.225 qpair failed and we were unable to recover it.
[... repeated identical connect() retries (errno = 111) against 10.0.0.2:4420 elided ...]
00:35:56.226 [2024-12-06 17:02:44.693844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.226 [2024-12-06 17:02:44.693854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.226 qpair failed and we were unable to recover it.
00:35:56.226 [2024-12-06 17:02:44.694197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.226 [2024-12-06 17:02:44.694206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.226 qpair failed and we were unable to recover it.
00:35:56.226 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.226 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:56.226 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.226 [2024-12-06 17:02:44.694539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.226 [2024-12-06 17:02:44.694547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.226 qpair failed and we were unable to recover it.
00:35:56.226 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... repeated identical connect() retries (errno = 111) against 10.0.0.2:4420 elided ...]
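The namespace step above attaches the Malloc0 bdev to cnode1. Done standalone, the bdev has to exist first; a sketch assuming the usual malloc-bdev setup (the 64 MiB / 512 B geometry is illustrative and not taken from this log):

# Create a RAM-backed bdev (size/block-size values are illustrative) and
# expose it as a namespace of the subsystem, mirroring target_disconnect.sh@24.
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0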
00:35:56.226 [2024-12-06 17:02:44.696347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.226 [2024-12-06 17:02:44.696355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.226 qpair failed and we were unable to recover it.
[... repeated identical connect() retries (errno = 111) against 10.0.0.2:4420 elided ...]
00:35:56.227 [2024-12-06 17:02:44.701648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.227 [2024-12-06 17:02:44.701656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.227 qpair failed and we were unable to recover it.
00:35:56.227 [2024-12-06 17:02:44.701969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.227 [2024-12-06 17:02:44.701978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.227 qpair failed and we were unable to recover it.
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.227 [2024-12-06 17:02:44.702482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.227 [2024-12-06 17:02:44.702490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.227 qpair failed and we were unable to recover it.
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... repeated identical connect() retries (errno = 111) against 10.0.0.2:4420 elided ...]
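Adding the listener is the step that finally makes the target bind 10.0.0.2:4420 and ends the errno = 111 flood. Done standalone, the TCP transport must have been created first; a sketch under that assumption, with the listener flags copied from the trace (the trace does the same next for the discovery subsystem):

# The transport is normally created once, earlier in target setup (assumed here).
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp
# Bind the subsystem, and then the discovery subsystem, to the address
# the host has been retrying.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener discovery \
  -t tcp -a 10.0.0.2 -s 4420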
00:35:56.227 [2024-12-06 17:02:44.703871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.227 [2024-12-06 17:02:44.703879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.227 qpair failed and we were unable to recover it.
[... repeated identical connect() retries (errno = 111) against 10.0.0.2:4420 elided ...]
00:35:56.227 [2024-12-06 17:02:44.705644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:35:56.227 [2024-12-06 17:02:44.705652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f89e4000b90 with addr=10.0.0.2, port=4420
00:35:56.227 qpair failed and we were unable to recover it.
00:35:56.227 [2024-12-06 17:02:44.705918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:35:56.227 [2024-12-06 17:02:44.712169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.227 [2024-12-06 17:02:44.712232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.227 [2024-12-06 17:02:44.712246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.227 [2024-12-06 17:02:44.712252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.227 [2024-12-06 17:02:44.712258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90
00:35:56.227 [2024-12-06 17:02:44.712273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:56.227 qpair failed and we were unable to recover it.
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:56.227 17:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2515719
00:35:56.227 [2024-12-06 17:02:44.722126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.227 [2024-12-06 17:02:44.722172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.227 [2024-12-06 17:02:44.722182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.227 [2024-12-06 17:02:44.722187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.227 [2024-12-06 17:02:44.722192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90
00:35:56.227 [2024-12-06 17:02:44.722203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:56.227 qpair failed and we were unable to recover it.
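Once the listener is up the failure mode changes: the TCP connect now succeeds, but the Fabrics CONNECT for the I/O queue is rejected because the host presents controller ID 0x1, which this target instance does not recognize (hence _nvmf_ctrlr_add_io_qpair's "Unknown controller ID 0x1"). In the completion, sct 1 is the command-specific status type and sc 130 is 0x82, in the Fabrics command-specific status range that starts at 0x80. A small decode sketch, illustrative only:

# Decode the two status values seen above (sketch, not part of the test).
printf 'errno 111 -> %s\n' "$(python3 -c 'import os; print(os.strerror(111))')"   # Connection refused
printf 'sc %d -> 0x%x (Fabrics command-specific status, range 0x80+)\n' 130 130   # 0x82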
00:35:56.227 [2024-12-06 17:02:44.732130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.227 [2024-12-06 17:02:44.732176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.227 [2024-12-06 17:02:44.732186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.227 [2024-12-06 17:02:44.732191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.227 [2024-12-06 17:02:44.732195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90
00:35:56.227 [2024-12-06 17:02:44.732206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:56.227 qpair failed and we were unable to recover it.
[... the same Unknown controller ID 0x1 / CONNECT sct 1, sc 130 failure group recurs roughly every 10 ms on qpair id 2 while target_disconnect.sh@50 waits; intermediate records elided ...]
00:35:56.503 [2024-12-06 17:02:45.082977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:56.503 [2024-12-06 17:02:45.083014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:56.503 [2024-12-06 17:02:45.083024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:56.503 [2024-12-06 17:02:45.083029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:56.503 [2024-12-06 17:02:45.083033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90
00:35:56.503 [2024-12-06 17:02:45.083044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:56.503 qpair failed and we were unable to recover it.
00:35:56.503 [2024-12-06 17:02:45.092982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.503 [2024-12-06 17:02:45.093023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.503 [2024-12-06 17:02:45.093034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.503 [2024-12-06 17:02:45.093039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.503 [2024-12-06 17:02:45.093044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.503 [2024-12-06 17:02:45.093054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.503 qpair failed and we were unable to recover it. 00:35:56.503 [2024-12-06 17:02:45.103013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.503 [2024-12-06 17:02:45.103054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.504 [2024-12-06 17:02:45.103064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.504 [2024-12-06 17:02:45.103069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.504 [2024-12-06 17:02:45.103073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.504 [2024-12-06 17:02:45.103084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.504 qpair failed and we were unable to recover it. 00:35:56.504 [2024-12-06 17:02:45.113055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.504 [2024-12-06 17:02:45.113099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.504 [2024-12-06 17:02:45.113112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.504 [2024-12-06 17:02:45.113117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.504 [2024-12-06 17:02:45.113122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.504 [2024-12-06 17:02:45.113133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.504 qpair failed and we were unable to recover it. 
00:35:56.504 [2024-12-06 17:02:45.123067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.504 [2024-12-06 17:02:45.123108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.504 [2024-12-06 17:02:45.123118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.504 [2024-12-06 17:02:45.123123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.504 [2024-12-06 17:02:45.123128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.504 [2024-12-06 17:02:45.123138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.504 qpair failed and we were unable to recover it. 00:35:56.504 [2024-12-06 17:02:45.133107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.504 [2024-12-06 17:02:45.133152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.504 [2024-12-06 17:02:45.133162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.504 [2024-12-06 17:02:45.133171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.504 [2024-12-06 17:02:45.133175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.504 [2024-12-06 17:02:45.133186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.504 qpair failed and we were unable to recover it. 00:35:56.504 [2024-12-06 17:02:45.142991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.504 [2024-12-06 17:02:45.143031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.504 [2024-12-06 17:02:45.143040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.504 [2024-12-06 17:02:45.143045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.504 [2024-12-06 17:02:45.143050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.504 [2024-12-06 17:02:45.143060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.504 qpair failed and we were unable to recover it. 
00:35:56.504 [2024-12-06 17:02:45.153133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.504 [2024-12-06 17:02:45.153179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.504 [2024-12-06 17:02:45.153189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.505 [2024-12-06 17:02:45.153194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.505 [2024-12-06 17:02:45.153199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.505 [2024-12-06 17:02:45.153210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.505 qpair failed and we were unable to recover it. 00:35:56.505 [2024-12-06 17:02:45.163172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.505 [2024-12-06 17:02:45.163209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.505 [2024-12-06 17:02:45.163218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.505 [2024-12-06 17:02:45.163224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.505 [2024-12-06 17:02:45.163229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.505 [2024-12-06 17:02:45.163239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.505 qpair failed and we were unable to recover it. 00:35:56.505 [2024-12-06 17:02:45.173184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.505 [2024-12-06 17:02:45.173248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.505 [2024-12-06 17:02:45.173258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.505 [2024-12-06 17:02:45.173263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.505 [2024-12-06 17:02:45.173268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.505 [2024-12-06 17:02:45.173282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.505 qpair failed and we were unable to recover it. 
00:35:56.505 [2024-12-06 17:02:45.183219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.505 [2024-12-06 17:02:45.183266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.505 [2024-12-06 17:02:45.183275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.505 [2024-12-06 17:02:45.183280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.505 [2024-12-06 17:02:45.183285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.505 [2024-12-06 17:02:45.183295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.505 qpair failed and we were unable to recover it. 00:35:56.768 [2024-12-06 17:02:45.193254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.768 [2024-12-06 17:02:45.193304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.768 [2024-12-06 17:02:45.193313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.768 [2024-12-06 17:02:45.193318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.768 [2024-12-06 17:02:45.193323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.768 [2024-12-06 17:02:45.193333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.768 qpair failed and we were unable to recover it. 00:35:56.768 [2024-12-06 17:02:45.203288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.768 [2024-12-06 17:02:45.203332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.768 [2024-12-06 17:02:45.203343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.768 [2024-12-06 17:02:45.203348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.768 [2024-12-06 17:02:45.203353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.768 [2024-12-06 17:02:45.203364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.768 qpair failed and we were unable to recover it. 
00:35:56.768 [2024-12-06 17:02:45.213318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.768 [2024-12-06 17:02:45.213364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.768 [2024-12-06 17:02:45.213374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.768 [2024-12-06 17:02:45.213379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.768 [2024-12-06 17:02:45.213384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.768 [2024-12-06 17:02:45.213394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.768 qpair failed and we were unable to recover it. 00:35:56.768 [2024-12-06 17:02:45.223355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.768 [2024-12-06 17:02:45.223443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.768 [2024-12-06 17:02:45.223453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.768 [2024-12-06 17:02:45.223458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.768 [2024-12-06 17:02:45.223463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.223473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.233353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.233399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.233409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.233414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.233419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.233429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 
00:35:56.769 [2024-12-06 17:02:45.243410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.243451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.243461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.243466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.243471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.243481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.253395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.253453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.253462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.253467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.253472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.253482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.263407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.263450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.263462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.263467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.263472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.263482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 
00:35:56.769 [2024-12-06 17:02:45.273486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.273559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.273568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.273573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.273578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.273589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.283507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.283553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.283563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.283568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.283572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.283582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.293518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.293559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.293569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.293574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.293578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.293588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 
00:35:56.769 [2024-12-06 17:02:45.303536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.303577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.303587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.303592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.303599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.303609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.313453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.313527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.313536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.313541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.313546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.313556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.323598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.323639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.323648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.323653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.323658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.323668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 
00:35:56.769 [2024-12-06 17:02:45.333631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.333669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.333678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.333683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.333688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.333698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.343671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.343709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.343718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.343724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.343728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.343738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.353711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.353751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.353761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.353766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.353770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.353780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 
00:35:56.769 [2024-12-06 17:02:45.363687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.363732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.363742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.363747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.363752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.363762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.373747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.373786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.373795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.373801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.373806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.373816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.383747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.383785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.383798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.383804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.383808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.383820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 
00:35:56.769 [2024-12-06 17:02:45.393795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.393840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.393862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.393868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.393873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.393888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.403878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.403952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.403970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.403977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.403982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.769 [2024-12-06 17:02:45.403996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.769 qpair failed and we were unable to recover it. 00:35:56.769 [2024-12-06 17:02:45.413841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.769 [2024-12-06 17:02:45.413882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.769 [2024-12-06 17:02:45.413893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.769 [2024-12-06 17:02:45.413898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.769 [2024-12-06 17:02:45.413903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.770 [2024-12-06 17:02:45.413914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.770 qpair failed and we were unable to recover it. 
00:35:56.770 [2024-12-06 17:02:45.423874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.770 [2024-12-06 17:02:45.423919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.770 [2024-12-06 17:02:45.423929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.770 [2024-12-06 17:02:45.423935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.770 [2024-12-06 17:02:45.423940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.770 [2024-12-06 17:02:45.423950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-12-06 17:02:45.433898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.770 [2024-12-06 17:02:45.433938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.770 [2024-12-06 17:02:45.433948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.770 [2024-12-06 17:02:45.433953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.770 [2024-12-06 17:02:45.433963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.770 [2024-12-06 17:02:45.433974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.770 qpair failed and we were unable to recover it. 00:35:56.770 [2024-12-06 17:02:45.443889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.770 [2024-12-06 17:02:45.443928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.770 [2024-12-06 17:02:45.443938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.770 [2024-12-06 17:02:45.443943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.770 [2024-12-06 17:02:45.443948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.770 [2024-12-06 17:02:45.443958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.770 qpair failed and we were unable to recover it. 
00:35:56.770 [2024-12-06 17:02:45.453932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:56.770 [2024-12-06 17:02:45.453971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:56.770 [2024-12-06 17:02:45.453981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:56.770 [2024-12-06 17:02:45.453987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:56.770 [2024-12-06 17:02:45.453991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:56.770 [2024-12-06 17:02:45.454002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:56.770 qpair failed and we were unable to recover it. 00:35:57.030 [2024-12-06 17:02:45.463985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.464028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.464039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.464044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.464049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.464060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 00:35:57.030 [2024-12-06 17:02:45.474011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.474053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.474063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.474069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.474073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.474084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 
00:35:57.030 [2024-12-06 17:02:45.484004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.484045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.484055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.484060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.484065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.484075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 00:35:57.030 [2024-12-06 17:02:45.494026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.494062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.494072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.494077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.494082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.494092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 00:35:57.030 [2024-12-06 17:02:45.504086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.504129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.504139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.504145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.504149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.504159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 
00:35:57.030 [2024-12-06 17:02:45.514115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.514159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.514168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.514174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.514179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.514189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 00:35:57.030 [2024-12-06 17:02:45.524104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.524143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.524154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.524159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.524164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.524175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 00:35:57.030 [2024-12-06 17:02:45.534144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.534203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.534212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.534217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.534222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.534233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 
00:35:57.030 [2024-12-06 17:02:45.544189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.030 [2024-12-06 17:02:45.544233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.030 [2024-12-06 17:02:45.544242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.030 [2024-12-06 17:02:45.544247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.030 [2024-12-06 17:02:45.544252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.030 [2024-12-06 17:02:45.544262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.030 qpair failed and we were unable to recover it. 00:35:57.031 [2024-12-06 17:02:45.554198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.031 [2024-12-06 17:02:45.554239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.031 [2024-12-06 17:02:45.554249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.031 [2024-12-06 17:02:45.554254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.031 [2024-12-06 17:02:45.554258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.031 [2024-12-06 17:02:45.554268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.031 qpair failed and we were unable to recover it. 00:35:57.031 [2024-12-06 17:02:45.564322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.031 [2024-12-06 17:02:45.564366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.031 [2024-12-06 17:02:45.564376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.031 [2024-12-06 17:02:45.564383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.031 [2024-12-06 17:02:45.564388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.031 [2024-12-06 17:02:45.564398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.031 qpair failed and we were unable to recover it. 
00:35:57.556 [2024-12-06 17:02:46.236080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.556 [2024-12-06 17:02:46.236124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.556 [2024-12-06 17:02:46.236134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.556 [2024-12-06 17:02:46.236139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.556 [2024-12-06 17:02:46.236144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.556 [2024-12-06 17:02:46.236154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.556 qpair failed and we were unable to recover it. 00:35:57.816 [2024-12-06 17:02:46.246082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.816 [2024-12-06 17:02:46.246127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.816 [2024-12-06 17:02:46.246138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.816 [2024-12-06 17:02:46.246143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.816 [2024-12-06 17:02:46.246148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.816 [2024-12-06 17:02:46.246159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.816 qpair failed and we were unable to recover it. 00:35:57.816 [2024-12-06 17:02:46.256127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.816 [2024-12-06 17:02:46.256163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.816 [2024-12-06 17:02:46.256173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.816 [2024-12-06 17:02:46.256181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.816 [2024-12-06 17:02:46.256186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.816 [2024-12-06 17:02:46.256196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.816 qpair failed and we were unable to recover it. 
00:35:57.816 [2024-12-06 17:02:46.266155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.816 [2024-12-06 17:02:46.266195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.816 [2024-12-06 17:02:46.266205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.816 [2024-12-06 17:02:46.266210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.816 [2024-12-06 17:02:46.266215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.816 [2024-12-06 17:02:46.266225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.816 qpair failed and we were unable to recover it. 00:35:57.816 [2024-12-06 17:02:46.276241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.816 [2024-12-06 17:02:46.276288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.816 [2024-12-06 17:02:46.276298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.816 [2024-12-06 17:02:46.276303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.816 [2024-12-06 17:02:46.276308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.816 [2024-12-06 17:02:46.276318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.816 qpair failed and we were unable to recover it. 00:35:57.816 [2024-12-06 17:02:46.286202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.816 [2024-12-06 17:02:46.286262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.816 [2024-12-06 17:02:46.286272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.816 [2024-12-06 17:02:46.286277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.816 [2024-12-06 17:02:46.286282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.816 [2024-12-06 17:02:46.286292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.816 qpair failed and we were unable to recover it. 
00:35:57.816 [2024-12-06 17:02:46.296210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.816 [2024-12-06 17:02:46.296254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.816 [2024-12-06 17:02:46.296264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.816 [2024-12-06 17:02:46.296269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.816 [2024-12-06 17:02:46.296274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.816 [2024-12-06 17:02:46.296290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.816 qpair failed and we were unable to recover it. 00:35:57.816 [2024-12-06 17:02:46.306255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.816 [2024-12-06 17:02:46.306294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.306304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.306309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.306314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.306324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.316272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.316319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.316329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.316334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.316339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.316349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 
00:35:57.817 [2024-12-06 17:02:46.326279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.326315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.326325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.326330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.326334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.326344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.336333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.336408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.336417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.336422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.336427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.336437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.346378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.346423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.346432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.346438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.346442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.346452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 
00:35:57.817 [2024-12-06 17:02:46.356419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.356460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.356470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.356475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.356480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.356490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.366427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.366464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.366476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.366482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.366488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.366499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.376451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.376495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.376504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.376510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.376514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.376525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 
00:35:57.817 [2024-12-06 17:02:46.386375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.386421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.386433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.386438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.386443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.386453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.396421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.396464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.396473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.396478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.396483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.396493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.406483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.406523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.406532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.406538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.406543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.406553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 
00:35:57.817 [2024-12-06 17:02:46.416564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.416631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.416641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.416646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.416651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.416661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.426580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.426624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.426635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.426640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.426647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.426659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.436484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.436540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.436550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.436555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.436560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.436571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 
00:35:57.817 [2024-12-06 17:02:46.446632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.446687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.446696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.446702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.446707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.446717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.456660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.456699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.456709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.456714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.456719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.456729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 00:35:57.817 [2024-12-06 17:02:46.466690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.466732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.466741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.466746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.466751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.817 [2024-12-06 17:02:46.466762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.817 qpair failed and we were unable to recover it. 
00:35:57.817 [2024-12-06 17:02:46.476731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.817 [2024-12-06 17:02:46.476821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.817 [2024-12-06 17:02:46.476830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.817 [2024-12-06 17:02:46.476836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.817 [2024-12-06 17:02:46.476840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.818 [2024-12-06 17:02:46.476851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.818 qpair failed and we were unable to recover it. 00:35:57.818 [2024-12-06 17:02:46.486737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.818 [2024-12-06 17:02:46.486820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.818 [2024-12-06 17:02:46.486830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.818 [2024-12-06 17:02:46.486835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.818 [2024-12-06 17:02:46.486841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.818 [2024-12-06 17:02:46.486851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.818 qpair failed and we were unable to recover it. 00:35:57.818 [2024-12-06 17:02:46.496769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.818 [2024-12-06 17:02:46.496807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.818 [2024-12-06 17:02:46.496817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.818 [2024-12-06 17:02:46.496822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.818 [2024-12-06 17:02:46.496827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.818 [2024-12-06 17:02:46.496837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.818 qpair failed and we were unable to recover it. 
00:35:57.818 [2024-12-06 17:02:46.506787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:57.818 [2024-12-06 17:02:46.506826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:57.818 [2024-12-06 17:02:46.506835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:57.818 [2024-12-06 17:02:46.506840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:57.818 [2024-12-06 17:02:46.506845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:57.818 [2024-12-06 17:02:46.506856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:57.818 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.516830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.516886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.516898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.516903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.516908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.516918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.526820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.526874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.526883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.526888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.526893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.526903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 
00:35:58.078 [2024-12-06 17:02:46.536875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.536921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.536939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.536946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.536951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.536966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.546907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.546948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.546959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.546964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.546969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.546980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.556994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.557039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.557049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.557054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.557062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.557073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 
00:35:58.078 [2024-12-06 17:02:46.566953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.566994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.567004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.567009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.567014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.567024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.576977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.577014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.577024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.577029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.577034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.577044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.587013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.587053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.587063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.587068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.587073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.587083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 
00:35:58.078 [2024-12-06 17:02:46.597049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.597124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.597134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.597139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.597144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.597155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.607058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.607106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.607116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.607121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.607126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.607136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.617088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.617128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.617137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.617142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.617148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.617158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 
00:35:58.078 [2024-12-06 17:02:46.627124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.627166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.627175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.627180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.627185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.627196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.078 qpair failed and we were unable to recover it. 00:35:58.078 [2024-12-06 17:02:46.637163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.078 [2024-12-06 17:02:46.637217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.078 [2024-12-06 17:02:46.637227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.078 [2024-12-06 17:02:46.637232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.078 [2024-12-06 17:02:46.637238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.078 [2024-12-06 17:02:46.637248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.079 qpair failed and we were unable to recover it. 00:35:58.079 [2024-12-06 17:02:46.647167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.079 [2024-12-06 17:02:46.647207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.079 [2024-12-06 17:02:46.647219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.079 [2024-12-06 17:02:46.647225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.079 [2024-12-06 17:02:46.647229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.079 [2024-12-06 17:02:46.647240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.079 qpair failed and we were unable to recover it. 
00:35:58.079 [2024-12-06 17:02:46.657065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.079 [2024-12-06 17:02:46.657108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.079 [2024-12-06 17:02:46.657118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.079 [2024-12-06 17:02:46.657123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.079 [2024-12-06 17:02:46.657128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.079 [2024-12-06 17:02:46.657138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.079 qpair failed and we were unable to recover it. 00:35:58.079 [2024-12-06 17:02:46.667196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.079 [2024-12-06 17:02:46.667237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.079 [2024-12-06 17:02:46.667247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.079 [2024-12-06 17:02:46.667252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.079 [2024-12-06 17:02:46.667257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.079 [2024-12-06 17:02:46.667267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.079 qpair failed and we were unable to recover it. 00:35:58.079 [2024-12-06 17:02:46.677302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.079 [2024-12-06 17:02:46.677347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.079 [2024-12-06 17:02:46.677357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.079 [2024-12-06 17:02:46.677363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.079 [2024-12-06 17:02:46.677368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.079 [2024-12-06 17:02:46.677378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.079 qpair failed and we were unable to recover it. 
00:35:58.079 [2024-12-06 17:02:46.687292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.079 [2024-12-06 17:02:46.687332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.079 [2024-12-06 17:02:46.687342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.079 [2024-12-06 17:02:46.687350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.079 [2024-12-06 17:02:46.687354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90
00:35:58.079 [2024-12-06 17:02:46.687365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:58.079 qpair failed and we were unable to recover it.
[... 67 further CONNECT attempts, identical apart from timestamps, elided: the same six *ERROR* lines and the closing "qpair failed and we were unable to recover it." repeat at roughly 10 ms intervals from 17:02:46.697 through 17:02:47.359 ...]
00:35:58.865 [2024-12-06 17:02:47.368986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:58.865 [2024-12-06 17:02:47.369026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:58.865 [2024-12-06 17:02:47.369035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:58.865 [2024-12-06 17:02:47.369040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:58.865 [2024-12-06 17:02:47.369045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90
00:35:58.865 [2024-12-06 17:02:47.369054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:35:58.865 qpair failed and we were unable to recover it.
00:35:58.865 [2024-12-06 17:02:47.379147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.379188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.379197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.379205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.379209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.379220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 00:35:58.865 [2024-12-06 17:02:47.389174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.389223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.389233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.389238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.389243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.389252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 00:35:58.865 [2024-12-06 17:02:47.399202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.399245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.399255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.399260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.399265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.399275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 
00:35:58.865 [2024-12-06 17:02:47.409237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.409277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.409286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.409291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.409296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.409306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 00:35:58.865 [2024-12-06 17:02:47.419248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.419292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.419302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.419307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.419312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.419324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 00:35:58.865 [2024-12-06 17:02:47.429277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.429318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.429327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.429333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.429337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.429348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 
00:35:58.865 [2024-12-06 17:02:47.439332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.439373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.439383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.439388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.439393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.439403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 00:35:58.865 [2024-12-06 17:02:47.449344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.449382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.449391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.449396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.449401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.449411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 00:35:58.865 [2024-12-06 17:02:47.459412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.865 [2024-12-06 17:02:47.459450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.865 [2024-12-06 17:02:47.459460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.865 [2024-12-06 17:02:47.459465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.865 [2024-12-06 17:02:47.459470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.865 [2024-12-06 17:02:47.459479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.865 qpair failed and we were unable to recover it. 
00:35:58.865 [2024-12-06 17:02:47.469391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.469453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.469462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.469468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.469472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.469482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 00:35:58.866 [2024-12-06 17:02:47.479416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.479466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.479475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.479480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.479485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.479495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 00:35:58.866 [2024-12-06 17:02:47.489436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.489484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.489493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.489498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.489503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.489513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 
00:35:58.866 [2024-12-06 17:02:47.499457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.499494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.499504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.499509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.499514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.499524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 00:35:58.866 [2024-12-06 17:02:47.509492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.509536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.509547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.509552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.509557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.509567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 00:35:58.866 [2024-12-06 17:02:47.519380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.519420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.519430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.519435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.519440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.519449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 
00:35:58.866 [2024-12-06 17:02:47.529542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.529577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.529586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.529592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.529596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.529606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 00:35:58.866 [2024-12-06 17:02:47.539551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.539587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.539596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.539601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.539606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.539616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 00:35:58.866 [2024-12-06 17:02:47.549590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:58.866 [2024-12-06 17:02:47.549632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:58.866 [2024-12-06 17:02:47.549642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:58.866 [2024-12-06 17:02:47.549647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:58.866 [2024-12-06 17:02:47.549654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:58.866 [2024-12-06 17:02:47.549664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:58.866 qpair failed and we were unable to recover it. 
00:35:59.128 [2024-12-06 17:02:47.559634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.559676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.559685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.559690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.559695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.559705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 00:35:59.128 [2024-12-06 17:02:47.569690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.569736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.569745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.569751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.569755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.569766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 00:35:59.128 [2024-12-06 17:02:47.579692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.579733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.579742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.579748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.579752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.579762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 
00:35:59.128 [2024-12-06 17:02:47.589737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.589809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.589818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.589824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.589828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.589838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 00:35:59.128 [2024-12-06 17:02:47.599772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.599814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.599824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.599829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.599834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.599844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 00:35:59.128 [2024-12-06 17:02:47.609767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.609856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.609866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.609871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.609876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.609886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 
00:35:59.128 [2024-12-06 17:02:47.619785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.619823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.619833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.619838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.619843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.619853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 00:35:59.128 [2024-12-06 17:02:47.629800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.629882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.629891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.629896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.629901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.629911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 00:35:59.128 [2024-12-06 17:02:47.639858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.639901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.639914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.639919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.639923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.639933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 
00:35:59.128 [2024-12-06 17:02:47.649874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.649909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.128 [2024-12-06 17:02:47.649919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.128 [2024-12-06 17:02:47.649924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.128 [2024-12-06 17:02:47.649929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.128 [2024-12-06 17:02:47.649939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.128 qpair failed and we were unable to recover it. 00:35:59.128 [2024-12-06 17:02:47.659848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.128 [2024-12-06 17:02:47.659890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.659900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.659905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.659910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.659920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.669915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.669954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.669963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.669968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.669973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.669983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 
00:35:59.129 [2024-12-06 17:02:47.679814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.679857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.679867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.679872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.679882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.679893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.689980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.690016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.690026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.690031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.690035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.690045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.699977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.700017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.700027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.700032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.700037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.700047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 
00:35:59.129 [2024-12-06 17:02:47.710028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.710071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.710080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.710085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.710090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.710103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.720062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.720104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.720114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.720119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.720123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.720133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.730067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.730106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.730115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.730120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.730125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.730135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 
00:35:59.129 [2024-12-06 17:02:47.740109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.740149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.740158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.740163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.740168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.740178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.750135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.750184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.750193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.750198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.750202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.750212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.760178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.760223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.760233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.760237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.760242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.760253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 
00:35:59.129 [2024-12-06 17:02:47.770169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.770206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.770219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.770224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.770229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.770239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.780182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.780219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.780229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.780234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.780239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.780249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.790108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.790149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.790159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.790164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.790169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.790179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 
00:35:59.129 [2024-12-06 17:02:47.800289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.800332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.800341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.800346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.800351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.800362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.129 [2024-12-06 17:02:47.810336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.129 [2024-12-06 17:02:47.810373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.129 [2024-12-06 17:02:47.810382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.129 [2024-12-06 17:02:47.810397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.129 [2024-12-06 17:02:47.810402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.129 [2024-12-06 17:02:47.810412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.129 qpair failed and we were unable to recover it. 00:35:59.390 [2024-12-06 17:02:47.820323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.390 [2024-12-06 17:02:47.820368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.390 [2024-12-06 17:02:47.820378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.390 [2024-12-06 17:02:47.820383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.390 [2024-12-06 17:02:47.820388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.390 [2024-12-06 17:02:47.820398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.390 qpair failed and we were unable to recover it. 
00:35:59.390 [2024-12-06 17:02:47.830341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.390 [2024-12-06 17:02:47.830394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.390 [2024-12-06 17:02:47.830405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.390 [2024-12-06 17:02:47.830410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.390 [2024-12-06 17:02:47.830415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.390 [2024-12-06 17:02:47.830426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.390 qpair failed and we were unable to recover it. 00:35:59.390 [2024-12-06 17:02:47.840400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.390 [2024-12-06 17:02:47.840446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.390 [2024-12-06 17:02:47.840456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.390 [2024-12-06 17:02:47.840461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.390 [2024-12-06 17:02:47.840466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.390 [2024-12-06 17:02:47.840476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.390 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.850407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.850445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.850454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.850459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.850464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.850477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 
00:35:59.391 [2024-12-06 17:02:47.860289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.860332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.860341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.860346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.860351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.860361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.870438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.870481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.870491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.870496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.870501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.870511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.880533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.880612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.880622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.880627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.880632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.880641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 
00:35:59.391 [2024-12-06 17:02:47.890524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.890569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.890579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.890584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.890588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.890599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.900532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.900577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.900587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.900592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.900597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.900607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.910581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.910671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.910681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.910686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.910691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.910701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 
00:35:59.391 [2024-12-06 17:02:47.920607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.920650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.920660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.920665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.920670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.920680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.930641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.930679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.930688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.930694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.930698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.930708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.940656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.940701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.940711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.940719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.940724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.940734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 
00:35:59.391 [2024-12-06 17:02:47.950660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.950703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.950713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.950718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.950723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.950733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.960742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.960837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.960846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.960852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.960857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.960868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.970715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.970759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.970769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.970775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.970780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.970790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 
00:35:59.391 [2024-12-06 17:02:47.980779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.391 [2024-12-06 17:02:47.980817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.391 [2024-12-06 17:02:47.980826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.391 [2024-12-06 17:02:47.980832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.391 [2024-12-06 17:02:47.980836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.391 [2024-12-06 17:02:47.980849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.391 qpair failed and we were unable to recover it. 00:35:59.391 [2024-12-06 17:02:47.990659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:47.990702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:47.990712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:47.990717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:47.990722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:47.990732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 00:35:59.392 [2024-12-06 17:02:48.000823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.000879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.000888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.000894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.000899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.000909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 
00:35:59.392 [2024-12-06 17:02:48.010831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.010909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.010928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.010935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.010940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.010955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 00:35:59.392 [2024-12-06 17:02:48.020854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.020891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.020902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.020907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.020912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.020923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 00:35:59.392 [2024-12-06 17:02:48.030887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.030932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.030951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.030957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.030962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.030977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 
00:35:59.392 [2024-12-06 17:02:48.040925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.040970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.040980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.040986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.040990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.041002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 00:35:59.392 [2024-12-06 17:02:48.050950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.050993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.051003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.051008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.051012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.051023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 00:35:59.392 [2024-12-06 17:02:48.060975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.061054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.061063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.061068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.061073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.061083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 
00:35:59.392 [2024-12-06 17:02:48.070968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.071010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.071031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.071037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.071042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.071056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 00:35:59.392 [2024-12-06 17:02:48.081048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.392 [2024-12-06 17:02:48.081089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.392 [2024-12-06 17:02:48.081102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.392 [2024-12-06 17:02:48.081108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.392 [2024-12-06 17:02:48.081113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.392 [2024-12-06 17:02:48.081123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.392 qpair failed and we were unable to recover it. 00:35:59.652 [2024-12-06 17:02:48.090936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.652 [2024-12-06 17:02:48.090999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.652 [2024-12-06 17:02:48.091009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.652 [2024-12-06 17:02:48.091014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.652 [2024-12-06 17:02:48.091019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.652 [2024-12-06 17:02:48.091030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.652 qpair failed and we were unable to recover it. 
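The target-side line that opens each group, `ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1`, is the root of the pattern: an I/O queue CONNECT must name the controller ID (cntlid) that an earlier admin-queue CONNECT established, and the target can no longer find a live controller 0x1 in subsystem nqn.2016-06.io.spdk:cnode1. Below is a minimal sketch of that kind of lookup, using hypothetical simplified structures rather than the real ones from SPDK's lib/nvmf:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the target's bookkeeping. */
struct controller {
    uint16_t cntlid; /* ID assigned when the admin queue connected */
    bool     live;
};

struct subsystem {
    struct controller *ctrlrs;
    size_t             num_ctrlrs;
};

/* An I/O queue CONNECT is only valid if it references a controller
 * that still exists on the subsystem; otherwise the target fails the
 * command, which the host then sees as sct 1 / sc 130. */
static struct controller *
find_live_controller(struct subsystem *subsys, uint16_t cntlid)
{
    for (size_t i = 0; i < subsys->num_ctrlrs; i++) {
        if (subsys->ctrlrs[i].live && subsys->ctrlrs[i].cntlid == cntlid) {
            return &subsys->ctrlrs[i];
        }
    }
    return NULL;
}

int
main(void)
{
    /* A subsystem with no live controllers, as after a forced teardown. */
    struct subsystem subsys = { .ctrlrs = NULL, .num_ctrlrs = 0 };

    if (find_live_controller(&subsys, 0x1) == NULL) {
        fprintf(stderr, "Unknown controller ID 0x1\n");
        return 1;
    }
    return 0;
}
```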
00:35:59.652 [2024-12-06 17:02:48.101093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.652 [2024-12-06 17:02:48.101151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.652 [2024-12-06 17:02:48.101162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.652 [2024-12-06 17:02:48.101167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.652 [2024-12-06 17:02:48.101172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.652 [2024-12-06 17:02:48.101183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.652 qpair failed and we were unable to recover it. 00:35:59.652 [2024-12-06 17:02:48.111174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.652 [2024-12-06 17:02:48.111237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.652 [2024-12-06 17:02:48.111247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.652 [2024-12-06 17:02:48.111252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.652 [2024-12-06 17:02:48.111260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.652 [2024-12-06 17:02:48.111270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.652 qpair failed and we were unable to recover it. 00:35:59.652 [2024-12-06 17:02:48.121147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.652 [2024-12-06 17:02:48.121191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.652 [2024-12-06 17:02:48.121202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.652 [2024-12-06 17:02:48.121207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.652 [2024-12-06 17:02:48.121212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.652 [2024-12-06 17:02:48.121223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.652 qpair failed and we were unable to recover it. 
00:35:59.652 [2024-12-06 17:02:48.131188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.652 [2024-12-06 17:02:48.131241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.652 [2024-12-06 17:02:48.131251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.652 [2024-12-06 17:02:48.131256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.652 [2024-12-06 17:02:48.131261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.652 [2024-12-06 17:02:48.131271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.652 qpair failed and we were unable to recover it. 00:35:59.652 [2024-12-06 17:02:48.141207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.652 [2024-12-06 17:02:48.141249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.652 [2024-12-06 17:02:48.141259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.652 [2024-12-06 17:02:48.141264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.652 [2024-12-06 17:02:48.141269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.652 [2024-12-06 17:02:48.141280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.652 qpair failed and we were unable to recover it. 00:35:59.652 [2024-12-06 17:02:48.151258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.652 [2024-12-06 17:02:48.151337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.652 [2024-12-06 17:02:48.151346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.652 [2024-12-06 17:02:48.151351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.652 [2024-12-06 17:02:48.151356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.151367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 
00:35:59.653 [2024-12-06 17:02:48.161261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.161305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.161315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.161319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.161325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.161335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.171287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.171378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.171388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.171393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.171398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.171409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.181312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.181349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.181359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.181364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.181369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.181379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 
00:35:59.653 [2024-12-06 17:02:48.191354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.191402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.191412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.191417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.191422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.191432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.201232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.201276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.201289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.201295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.201300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.201311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.211407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.211449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.211459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.211465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.211470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.211480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 
00:35:59.653 [2024-12-06 17:02:48.221405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.221443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.221453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.221458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.221463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.221473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.231436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.231480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.231489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.231494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.231499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.231509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.241470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.241513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.241523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.241528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.241536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.241546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 
00:35:59.653 [2024-12-06 17:02:48.251503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.251543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.251553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.251558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.251563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.251573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.261534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.261574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.261584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.261589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.653 [2024-12-06 17:02:48.261594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.653 [2024-12-06 17:02:48.261604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.653 qpair failed and we were unable to recover it. 00:35:59.653 [2024-12-06 17:02:48.271526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.653 [2024-12-06 17:02:48.271568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.653 [2024-12-06 17:02:48.271578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.653 [2024-12-06 17:02:48.271583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.271588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.271599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 
00:35:59.654 [2024-12-06 17:02:48.281593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.654 [2024-12-06 17:02:48.281661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.654 [2024-12-06 17:02:48.281671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.654 [2024-12-06 17:02:48.281676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.281681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.281691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 00:35:59.654 [2024-12-06 17:02:48.291619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.654 [2024-12-06 17:02:48.291670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.654 [2024-12-06 17:02:48.291680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.654 [2024-12-06 17:02:48.291686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.291690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.291701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 00:35:59.654 [2024-12-06 17:02:48.301638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.654 [2024-12-06 17:02:48.301678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.654 [2024-12-06 17:02:48.301687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.654 [2024-12-06 17:02:48.301692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.301697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.301708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 
00:35:59.654 [2024-12-06 17:02:48.311686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.654 [2024-12-06 17:02:48.311783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.654 [2024-12-06 17:02:48.311792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.654 [2024-12-06 17:02:48.311798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.311803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.311813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 00:35:59.654 [2024-12-06 17:02:48.321672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.654 [2024-12-06 17:02:48.321717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.654 [2024-12-06 17:02:48.321727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.654 [2024-12-06 17:02:48.321732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.321737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.321747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 00:35:59.654 [2024-12-06 17:02:48.331712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.654 [2024-12-06 17:02:48.331756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.654 [2024-12-06 17:02:48.331766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.654 [2024-12-06 17:02:48.331771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.331776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.331786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 
00:35:59.654 [2024-12-06 17:02:48.341763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.654 [2024-12-06 17:02:48.341807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.654 [2024-12-06 17:02:48.341817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.654 [2024-12-06 17:02:48.341822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.654 [2024-12-06 17:02:48.341827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.654 [2024-12-06 17:02:48.341837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.654 qpair failed and we were unable to recover it. 00:35:59.914 [2024-12-06 17:02:48.351773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.914 [2024-12-06 17:02:48.351828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.914 [2024-12-06 17:02:48.351837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.914 [2024-12-06 17:02:48.351842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.914 [2024-12-06 17:02:48.351847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.914 [2024-12-06 17:02:48.351857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.914 qpair failed and we were unable to recover it. 00:35:59.914 [2024-12-06 17:02:48.361862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.914 [2024-12-06 17:02:48.361930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.914 [2024-12-06 17:02:48.361948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.914 [2024-12-06 17:02:48.361955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.914 [2024-12-06 17:02:48.361960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.914 [2024-12-06 17:02:48.361974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.914 qpair failed and we were unable to recover it. 
00:35:59.914 [2024-12-06 17:02:48.371695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.914 [2024-12-06 17:02:48.371736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.914 [2024-12-06 17:02:48.371747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.914 [2024-12-06 17:02:48.371755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.914 [2024-12-06 17:02:48.371761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.914 [2024-12-06 17:02:48.371772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.914 qpair failed and we were unable to recover it. 00:35:59.914 [2024-12-06 17:02:48.381871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.914 [2024-12-06 17:02:48.381909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.914 [2024-12-06 17:02:48.381919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.914 [2024-12-06 17:02:48.381924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.914 [2024-12-06 17:02:48.381929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.914 [2024-12-06 17:02:48.381940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.914 qpair failed and we were unable to recover it. 00:35:59.914 [2024-12-06 17:02:48.391904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.914 [2024-12-06 17:02:48.391950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.914 [2024-12-06 17:02:48.391968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.914 [2024-12-06 17:02:48.391975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.914 [2024-12-06 17:02:48.391980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.914 [2024-12-06 17:02:48.391994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.914 qpair failed and we were unable to recover it. 
00:35:59.914 [2024-12-06 17:02:48.401993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.914 [2024-12-06 17:02:48.402035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.914 [2024-12-06 17:02:48.402047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.914 [2024-12-06 17:02:48.402052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.914 [2024-12-06 17:02:48.402057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.914 [2024-12-06 17:02:48.402069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.914 qpair failed and we were unable to recover it. 00:35:59.915 [2024-12-06 17:02:48.411985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.915 [2024-12-06 17:02:48.412026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.915 [2024-12-06 17:02:48.412036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.915 [2024-12-06 17:02:48.412041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.915 [2024-12-06 17:02:48.412046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.915 [2024-12-06 17:02:48.412064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.915 qpair failed and we were unable to recover it. 00:35:59.915 [2024-12-06 17:02:48.421857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:59.915 [2024-12-06 17:02:48.421896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:59.915 [2024-12-06 17:02:48.421906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:59.915 [2024-12-06 17:02:48.421911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:59.915 [2024-12-06 17:02:48.421916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:35:59.915 [2024-12-06 17:02:48.421927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:35:59.915 qpair failed and we were unable to recover it. 
00:36:00.439 [2024-12-06 17:02:49.093760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.439 [2024-12-06 17:02:49.093796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.439 [2024-12-06 17:02:49.093806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.439 [2024-12-06 17:02:49.093812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.439 [2024-12-06 17:02:49.093816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.439 [2024-12-06 17:02:49.093827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-06 17:02:49.103780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.439 [2024-12-06 17:02:49.103822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.439 [2024-12-06 17:02:49.103832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.439 [2024-12-06 17:02:49.103837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.439 [2024-12-06 17:02:49.103842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.439 [2024-12-06 17:02:49.103855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.439 [2024-12-06 17:02:49.113825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.439 [2024-12-06 17:02:49.113867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.439 [2024-12-06 17:02:49.113876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.439 [2024-12-06 17:02:49.113881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.439 [2024-12-06 17:02:49.113886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.439 [2024-12-06 17:02:49.113896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.439 qpair failed and we were unable to recover it. 
00:36:00.439 [2024-12-06 17:02:49.123855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.439 [2024-12-06 17:02:49.123936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.439 [2024-12-06 17:02:49.123946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.439 [2024-12-06 17:02:49.123951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.439 [2024-12-06 17:02:49.123956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.439 [2024-12-06 17:02:49.123967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.439 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.133915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.133980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.133990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.133995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.134000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.134010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.143894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.143934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.143944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.143949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.143954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.143964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 
00:36:00.700 [2024-12-06 17:02:49.153920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.153962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.153972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.153977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.153982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.153992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.163961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.164000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.164010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.164015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.164020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.164030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.174020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.174085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.174095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.174104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.174109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.174120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 
00:36:00.700 [2024-12-06 17:02:49.183975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.184013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.184023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.184028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.184033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.184043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.194035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.194073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.194088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.194093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.194098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.194111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.204062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.204110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.204120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.204125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.204130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.204141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 
00:36:00.700 [2024-12-06 17:02:49.214066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.214108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.214118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.214123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.214127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.214138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.223964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.224005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.224015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.224020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.224025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.224035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.700 qpair failed and we were unable to recover it. 00:36:00.700 [2024-12-06 17:02:49.234110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.700 [2024-12-06 17:02:49.234153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.700 [2024-12-06 17:02:49.234162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.700 [2024-12-06 17:02:49.234168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.700 [2024-12-06 17:02:49.234175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.700 [2024-12-06 17:02:49.234185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 
00:36:00.701 [2024-12-06 17:02:49.244171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.244214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.244223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.244228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.244233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.244244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.254188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.254232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.254241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.254246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.254251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.254261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.264202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.264242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.264252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.264257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.264261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.264272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 
00:36:00.701 [2024-12-06 17:02:49.274243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.274288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.274300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.274305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.274310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.274321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.284245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.284284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.284295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.284300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.284305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.284315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.294177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.294218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.294228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.294233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.294238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.294249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 
00:36:00.701 [2024-12-06 17:02:49.304341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.304381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.304391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.304396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.304401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.304411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.314338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.314382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.314392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.314397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.314401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.314412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.324374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.324413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.324425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.324430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.324435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.324445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 
00:36:00.701 [2024-12-06 17:02:49.334376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.334412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.334422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.334427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.334432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.334442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.344433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.344478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.344488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.344493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.344497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.344507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.701 [2024-12-06 17:02:49.354448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.354488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.354498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.354503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.354507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.354518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 
00:36:00.701 [2024-12-06 17:02:49.364448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.701 [2024-12-06 17:02:49.364490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.701 [2024-12-06 17:02:49.364499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.701 [2024-12-06 17:02:49.364507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.701 [2024-12-06 17:02:49.364512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.701 [2024-12-06 17:02:49.364522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.701 qpair failed and we were unable to recover it. 00:36:00.702 [2024-12-06 17:02:49.374477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.702 [2024-12-06 17:02:49.374517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.702 [2024-12-06 17:02:49.374527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.702 [2024-12-06 17:02:49.374532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.702 [2024-12-06 17:02:49.374537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.702 [2024-12-06 17:02:49.374547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.702 qpair failed and we were unable to recover it. 00:36:00.702 [2024-12-06 17:02:49.384485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.702 [2024-12-06 17:02:49.384522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.702 [2024-12-06 17:02:49.384531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.702 [2024-12-06 17:02:49.384536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.702 [2024-12-06 17:02:49.384541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.702 [2024-12-06 17:02:49.384551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.702 qpair failed and we were unable to recover it. 
00:36:00.963 [2024-12-06 17:02:49.394547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.394590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.394600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.394606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.394611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.394621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.404568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.404612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.404621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.404627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.404632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.404642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.414609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.414648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.414658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.414663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.414668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.414678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 
00:36:00.963 [2024-12-06 17:02:49.424641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.424682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.424691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.424696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.424701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.424711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.434653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.434695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.434704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.434710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.434715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.434725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.444595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.444659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.444668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.444673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.444678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.444688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 
00:36:00.963 [2024-12-06 17:02:49.454671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.454715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.454724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.454729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.454734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.454744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.464745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.464805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.464815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.464820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.464825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.464835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.474760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.474799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.474809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.474814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.474819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.474829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 
00:36:00.963 [2024-12-06 17:02:49.484791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.484835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.484846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.484852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.484856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.484867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.494817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.494900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.494910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.963 [2024-12-06 17:02:49.494918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.963 [2024-12-06 17:02:49.494923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.963 [2024-12-06 17:02:49.494934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.963 qpair failed and we were unable to recover it. 00:36:00.963 [2024-12-06 17:02:49.504828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.963 [2024-12-06 17:02:49.504882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.963 [2024-12-06 17:02:49.504892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.504897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.504902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.504912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 
00:36:00.964 [2024-12-06 17:02:49.514879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.514920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.514930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.514935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.514940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.514951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.524918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.524961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.524971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.524976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.524981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.524991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.534930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.534974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.534983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.534988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.534993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.535006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 
00:36:00.964 [2024-12-06 17:02:49.544930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.544985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.544995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.545001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.545006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.545016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.554938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.554979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.554989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.554994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.554999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.555009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.565013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.565054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.565063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.565069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.565073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.565084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 
00:36:00.964 [2024-12-06 17:02:49.575038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.575108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.575119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.575124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.575129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.575139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.585064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.585120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.585129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.585135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.585140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.585150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.595086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.595130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.595140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.595146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.595150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.595161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 
00:36:00.964 [2024-12-06 17:02:49.605115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.605198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.605207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.605213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.605217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.605228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.615152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.615190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.615199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.615204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.615209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.615220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 00:36:00.964 [2024-12-06 17:02:49.625181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.625253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.625266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.964 [2024-12-06 17:02:49.625271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.964 [2024-12-06 17:02:49.625276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.964 [2024-12-06 17:02:49.625286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.964 qpair failed and we were unable to recover it. 
00:36:00.964 [2024-12-06 17:02:49.635204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.964 [2024-12-06 17:02:49.635245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.964 [2024-12-06 17:02:49.635255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.965 [2024-12-06 17:02:49.635260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.965 [2024-12-06 17:02:49.635264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.965 [2024-12-06 17:02:49.635274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.965 qpair failed and we were unable to recover it. 00:36:00.965 [2024-12-06 17:02:49.645219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:00.965 [2024-12-06 17:02:49.645284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:00.965 [2024-12-06 17:02:49.645293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:00.965 [2024-12-06 17:02:49.645298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:00.965 [2024-12-06 17:02:49.645303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:00.965 [2024-12-06 17:02:49.645313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:00.965 qpair failed and we were unable to recover it. 00:36:01.225 [2024-12-06 17:02:49.655243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.655310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.655320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.655325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.655330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.655340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 
00:36:01.225 [2024-12-06 17:02:49.665271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.665314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.665324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.665329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.665334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.665347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 00:36:01.225 [2024-12-06 17:02:49.675304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.675345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.675355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.675360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.675365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.675375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 00:36:01.225 [2024-12-06 17:02:49.685341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.685432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.685442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.685448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.685452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.685463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 
00:36:01.225 [2024-12-06 17:02:49.695391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.695464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.695473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.695478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.695483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.695493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 00:36:01.225 [2024-12-06 17:02:49.705232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.705284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.705294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.705300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.705304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.705315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 00:36:01.225 [2024-12-06 17:02:49.715410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.715469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.715479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.715484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.715489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.715499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 
00:36:01.225 [2024-12-06 17:02:49.725472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.725541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.725551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.725556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.725561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.725571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 00:36:01.225 [2024-12-06 17:02:49.735452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.735494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.225 [2024-12-06 17:02:49.735504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.225 [2024-12-06 17:02:49.735509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.225 [2024-12-06 17:02:49.735514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.225 [2024-12-06 17:02:49.735524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.225 qpair failed and we were unable to recover it. 00:36:01.225 [2024-12-06 17:02:49.745456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.225 [2024-12-06 17:02:49.745495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.745505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.745510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.745515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.745525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 
00:36:01.226 [2024-12-06 17:02:49.755513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.755556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.755568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.755573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.755578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.755588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.765567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.765608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.765617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.765623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.765628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.765638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.775569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.775615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.775625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.775630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.775635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.775646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 
00:36:01.226 [2024-12-06 17:02:49.785599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.785636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.785645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.785650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.785655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.785666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.795481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.795524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.795534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.795539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.795546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.795556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.805575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.805618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.805627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.805632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.805637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.805647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 
00:36:01.226 [2024-12-06 17:02:49.815683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.815725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.815734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.815739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.815744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.815754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.825722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.825766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.825776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.825781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.825786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.825796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.835718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.835773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.835782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.835787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.835792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.835802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 
00:36:01.226 [2024-12-06 17:02:49.845765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.845851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.845861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.845866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.845871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.845881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.855648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.855685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.855694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.855699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.855704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.855714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 00:36:01.226 [2024-12-06 17:02:49.865907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.865974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.865983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.226 [2024-12-06 17:02:49.865988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.226 [2024-12-06 17:02:49.865993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.226 [2024-12-06 17:02:49.866003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.226 qpair failed and we were unable to recover it. 
00:36:01.226 [2024-12-06 17:02:49.875789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.226 [2024-12-06 17:02:49.875848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.226 [2024-12-06 17:02:49.875858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.227 [2024-12-06 17:02:49.875863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.227 [2024-12-06 17:02:49.875868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.227 [2024-12-06 17:02:49.875878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.227 qpair failed and we were unable to recover it. 00:36:01.227 [2024-12-06 17:02:49.885942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.227 [2024-12-06 17:02:49.886005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.227 [2024-12-06 17:02:49.886018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.227 [2024-12-06 17:02:49.886023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.227 [2024-12-06 17:02:49.886027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.227 [2024-12-06 17:02:49.886037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.227 qpair failed and we were unable to recover it. 00:36:01.227 [2024-12-06 17:02:49.895921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.227 [2024-12-06 17:02:49.895959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.227 [2024-12-06 17:02:49.895968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.227 [2024-12-06 17:02:49.895973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.227 [2024-12-06 17:02:49.895978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.227 [2024-12-06 17:02:49.895987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.227 qpair failed and we were unable to recover it. 
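Two details make these records easier to follow. First, each record carries two clocks: the leading 00:36:01.xxx column is the elapsed time of the test run, while the bracketed [2024-12-06 17:02:xx.xxxxxx] stamp is wall-clock time from the SPDK log. Second, target and initiator share one console here: ctrlr.c is the NVMe-oF target's controller code rejecting the CONNECT, while nvme_fabric.c, nvme_tcp.c, and nvme_qpair.c are the host driver observing the same failure from its side. The host-side lines come out of the qpair polling path, whose general shape is sketched below against SPDK's public API (a sketch only: ctrlr is assumed to come from an earlier spdk_nvme_connect(), and error handling is trimmed to the part that matches this log):

/* poll_io_qpair.c - the loop whose failure path yields the
 * "Failed to connect tqpair" / "CQ transport error -6" records. */
#include "spdk/nvme.h"

static int32_t poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;
	int32_t rc;

	/* The CONNECT capsule for the I/O queue is sent here ... */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		return -1;
	}

	for (;;) {
		/* ... and completed (or failed) while polling. A negative
		 * return, e.g. -ENXIO (-6), means the transport declared
		 * the qpair dead, as in every block of this capture. */
		rc = spdk_nvme_qpair_process_completions(qpair, 0);
		if (rc < 0) {
			break;
		}
	}

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	return rc;
}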
00:36:01.227 [2024-12-06 17:02:49.905919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.227 [2024-12-06 17:02:49.905962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.227 [2024-12-06 17:02:49.905972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.227 [2024-12-06 17:02:49.905976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.227 [2024-12-06 17:02:49.905981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.227 [2024-12-06 17:02:49.905991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.227 qpair failed and we were unable to recover it. 00:36:01.227 [2024-12-06 17:02:49.915948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.227 [2024-12-06 17:02:49.915988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.227 [2024-12-06 17:02:49.915998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.227 [2024-12-06 17:02:49.916002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.227 [2024-12-06 17:02:49.916007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.227 [2024-12-06 17:02:49.916017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.227 qpair failed and we were unable to recover it. 00:36:01.487 [2024-12-06 17:02:49.925980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.487 [2024-12-06 17:02:49.926026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.487 [2024-12-06 17:02:49.926036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.926047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.926051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.926062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 
00:36:01.488 [2024-12-06 17:02:49.935998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:49.936042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:49.936052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.936057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.936061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.936071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:49.946039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:49.946077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:49.946087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.946092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.946096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.946109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:49.956076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:49.956116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:49.956125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.956130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.956134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.956144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 
00:36:01.488 [2024-12-06 17:02:49.966096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:49.966141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:49.966150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.966155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.966159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.966169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:49.976123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:49.976164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:49.976174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.976178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.976183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.976193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:49.986146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:49.986184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:49.986193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.986198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.986202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.986212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 
00:36:01.488 [2024-12-06 17:02:49.996152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:49.996193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:49.996202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:49.996207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:49.996212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:49.996222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:50.006204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:50.006251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:50.006263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:50.006268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:50.006273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:50.006285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:50.016064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:50.016110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:50.016120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:50.016125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:50.016130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:50.016140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 
00:36:01.488 [2024-12-06 17:02:50.026209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:50.026248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:50.026258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:50.026264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:50.026270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:50.026281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:50.036289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:50.036376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:50.036386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:50.036391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:50.036395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:50.036406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 00:36:01.488 [2024-12-06 17:02:50.046340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:50.046414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.488 [2024-12-06 17:02:50.046424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.488 [2024-12-06 17:02:50.046429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.488 [2024-12-06 17:02:50.046433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.488 [2024-12-06 17:02:50.046444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.488 qpair failed and we were unable to recover it. 
00:36:01.488 [2024-12-06 17:02:50.056358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.488 [2024-12-06 17:02:50.056431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.056440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.056448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.056453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.056463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.066352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.066389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.066399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.066404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.066409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.066419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.076395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.076469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.076478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.076483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.076488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.076498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 
00:36:01.489 [2024-12-06 17:02:50.086378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.086422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.086432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.086437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.086442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.086452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.096427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.096465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.096475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.096480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.096484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.096497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.106442] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.106494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.106504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.106508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.106513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.106522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 
00:36:01.489 [2024-12-06 17:02:50.116482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.116567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.116576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.116581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.116585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.116595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.126385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.126430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.126440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.126445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.126449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.126459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.136535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.136575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.136584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.136589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.136593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.136603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 
00:36:01.489 [2024-12-06 17:02:50.146556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.146594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.146604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.146609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.146613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.146623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.156604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.156648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.156657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.156663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.156667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.156677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.489 [2024-12-06 17:02:50.166642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.166685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.166694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.166699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.166703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.166713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 
00:36:01.489 [2024-12-06 17:02:50.176738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.489 [2024-12-06 17:02:50.176803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.489 [2024-12-06 17:02:50.176813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.489 [2024-12-06 17:02:50.176817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.489 [2024-12-06 17:02:50.176822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.489 [2024-12-06 17:02:50.176832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.489 qpair failed and we were unable to recover it. 00:36:01.751 [2024-12-06 17:02:50.186681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.751 [2024-12-06 17:02:50.186721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.751 [2024-12-06 17:02:50.186733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.751 [2024-12-06 17:02:50.186738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.751 [2024-12-06 17:02:50.186743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.751 [2024-12-06 17:02:50.186753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.751 qpair failed and we were unable to recover it. 00:36:01.751 [2024-12-06 17:02:50.196717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.751 [2024-12-06 17:02:50.196801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.751 [2024-12-06 17:02:50.196811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.751 [2024-12-06 17:02:50.196815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.751 [2024-12-06 17:02:50.196820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.751 [2024-12-06 17:02:50.196829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.751 qpair failed and we were unable to recover it. 
00:36:01.751 [2024-12-06 17:02:50.206614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.751 [2024-12-06 17:02:50.206673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.751 [2024-12-06 17:02:50.206682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.751 [2024-12-06 17:02:50.206687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.751 [2024-12-06 17:02:50.206692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.751 [2024-12-06 17:02:50.206702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.751 qpair failed and we were unable to recover it. 00:36:01.751 [2024-12-06 17:02:50.216763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.751 [2024-12-06 17:02:50.216804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.751 [2024-12-06 17:02:50.216813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.751 [2024-12-06 17:02:50.216818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.751 [2024-12-06 17:02:50.216823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.751 [2024-12-06 17:02:50.216832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.226665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.226720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.226730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.226735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.226742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.226753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 
00:36:01.752 [2024-12-06 17:02:50.236817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.236856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.236865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.236870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.236875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.236885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.246856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.246898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.246908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.246912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.246917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.246926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.256871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.256910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.256919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.256924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.256928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.256938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 
00:36:01.752 [2024-12-06 17:02:50.266871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.266911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.266920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.266924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.266929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.266939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.276923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.277000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.277010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.277015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.277019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.277029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.286813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.286853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.286863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.286868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.286872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.286882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 
00:36:01.752 [2024-12-06 17:02:50.296943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.296982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.296991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.296996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.297001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.297011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.306977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.307027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.307037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.307041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.307046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.307056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.317037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.317078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.317090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.317095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.317099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.317112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 
00:36:01.752 [2024-12-06 17:02:50.327050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.327107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.327116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.327121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.327126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.327136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.337082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.337122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.337131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.337136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.337140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.337150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.347115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.347196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.347205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.347209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.347214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.347224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 
00:36:01.752 [2024-12-06 17:02:50.357160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.357246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.357255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.357260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.357267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.357277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.367143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.367186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.367195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.367200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.367206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.367216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.377209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.377243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.377252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.377257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.377262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.377273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 
00:36:01.752 [2024-12-06 17:02:50.387249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.387294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.387304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.387309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.387314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.387324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.397245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.397285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.397294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.397299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.397303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.397313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.407290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.407331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.407341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.407345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.407350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.407360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 
00:36:01.752 [2024-12-06 17:02:50.417303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.417382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.417391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.417396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.417400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.417410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.752 qpair failed and we were unable to recover it. 00:36:01.752 [2024-12-06 17:02:50.427333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.752 [2024-12-06 17:02:50.427370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.752 [2024-12-06 17:02:50.427379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.752 [2024-12-06 17:02:50.427384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.752 [2024-12-06 17:02:50.427389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.752 [2024-12-06 17:02:50.427398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.753 qpair failed and we were unable to recover it. 00:36:01.753 [2024-12-06 17:02:50.437344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:01.753 [2024-12-06 17:02:50.437385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:01.753 [2024-12-06 17:02:50.437395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:01.753 [2024-12-06 17:02:50.437399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:01.753 [2024-12-06 17:02:50.437404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:01.753 [2024-12-06 17:02:50.437413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:01.753 qpair failed and we were unable to recover it. 
00:36:02.013 [2024-12-06 17:02:50.447416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.447461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.447473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.447478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.447482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.447492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.457301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.457343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.457353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.457358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.457362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.457372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.467465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.467536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.467546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.467551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.467555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.467565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 
00:36:02.013 [2024-12-06 17:02:50.477470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.477553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.477562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.477567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.477572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.477581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.487480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.487519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.487528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.487535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.487540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.487550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.497530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.497566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.497575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.497580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.497584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.497594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 
00:36:02.013 [2024-12-06 17:02:50.507406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.507443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.507453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.507459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.507463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.507473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.517580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.517623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.517634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.517638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.517643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.517653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.527616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.527655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.527665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.527669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.527674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.527684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 
00:36:02.013 [2024-12-06 17:02:50.537638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.537696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.537705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.537710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.537714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.537724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.547664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.547699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.547709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.547713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.547718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.547728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 00:36:02.013 [2024-12-06 17:02:50.557697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.557738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.557748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.013 [2024-12-06 17:02:50.557753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.013 [2024-12-06 17:02:50.557757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.013 [2024-12-06 17:02:50.557767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.013 qpair failed and we were unable to recover it. 
00:36:02.013 [2024-12-06 17:02:50.567724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.013 [2024-12-06 17:02:50.567769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.013 [2024-12-06 17:02:50.567779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.567784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.567789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.567799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.577726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.577779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.577789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.577794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.577798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.577808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.587764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.587837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.587846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.587851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.587856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.587866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 
00:36:02.014 [2024-12-06 17:02:50.597808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.597848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.597857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.597862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.597866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.597876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.607803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.607843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.607853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.607857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.607862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.607871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.617868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.617906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.617915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.617922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.617927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.617936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 
00:36:02.014 [2024-12-06 17:02:50.627866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.627945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.627955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.627960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.627964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.627974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.637906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.637949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.637959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.637964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.637968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.637978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.647907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.647948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.647957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.647962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.647966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.647976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 
00:36:02.014 [2024-12-06 17:02:50.657831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.657899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.657908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.657912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.657917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.657931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.667976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.668025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.668035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.668039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.668044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.668053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.677972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.678010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.678019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.678024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.678029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.678038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 
00:36:02.014 [2024-12-06 17:02:50.688040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.014 [2024-12-06 17:02:50.688080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.014 [2024-12-06 17:02:50.688089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.014 [2024-12-06 17:02:50.688094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.014 [2024-12-06 17:02:50.688099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.014 [2024-12-06 17:02:50.688112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.014 qpair failed and we were unable to recover it. 00:36:02.014 [2024-12-06 17:02:50.698089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.015 [2024-12-06 17:02:50.698129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.015 [2024-12-06 17:02:50.698138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.015 [2024-12-06 17:02:50.698143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.015 [2024-12-06 17:02:50.698147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.015 [2024-12-06 17:02:50.698157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.015 qpair failed and we were unable to recover it. 00:36:02.274 [2024-12-06 17:02:50.707945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.274 [2024-12-06 17:02:50.707986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.274 [2024-12-06 17:02:50.707995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.274 [2024-12-06 17:02:50.708000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.274 [2024-12-06 17:02:50.708005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.274 [2024-12-06 17:02:50.708014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.274 qpair failed and we were unable to recover it. 
00:36:02.274 [2024-12-06 17:02:50.718089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.274 [2024-12-06 17:02:50.718133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.274 [2024-12-06 17:02:50.718142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.274 [2024-12-06 17:02:50.718147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.274 [2024-12-06 17:02:50.718152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.274 [2024-12-06 17:02:50.718162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.274 qpair failed and we were unable to recover it. 00:36:02.274 [2024-12-06 17:02:50.728139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.274 [2024-12-06 17:02:50.728184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.274 [2024-12-06 17:02:50.728193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.274 [2024-12-06 17:02:50.728198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.274 [2024-12-06 17:02:50.728202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.274 [2024-12-06 17:02:50.728212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.274 qpair failed and we were unable to recover it. 00:36:02.274 [2024-12-06 17:02:50.738165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.274 [2024-12-06 17:02:50.738205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.274 [2024-12-06 17:02:50.738215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.274 [2024-12-06 17:02:50.738220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.274 [2024-12-06 17:02:50.738224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.274 [2024-12-06 17:02:50.738234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.274 qpair failed and we were unable to recover it. 
00:36:02.274 [2024-12-06 17:02:50.748195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.748232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.748244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.748249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.748254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.275 [2024-12-06 17:02:50.748264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.275 qpair failed and we were unable to recover it. 00:36:02.275 [2024-12-06 17:02:50.758160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.758199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.758209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.758214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.758218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.275 [2024-12-06 17:02:50.758228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.275 qpair failed and we were unable to recover it. 00:36:02.275 [2024-12-06 17:02:50.768130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.768172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.768181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.768186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.768190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.275 [2024-12-06 17:02:50.768200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.275 qpair failed and we were unable to recover it. 
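The block above repeats one failure signature roughly every 10 ms: the target-side controller code rejects each I/O-queue CONNECT because controller ID 0x1 is no longer known (consistent with the target having been torn down and restarted mid-association by this disconnect test), and the host-side connect poll surfaces that as sct 1, sc 130 — status code 0x82, which the NVMe-oF spec assigns to the Fabrics CONNECT "Invalid Parameters" status. A minimal sketch of probing the same listener by hand, assuming nvme-cli and the kernel fabrics modules are available (neither is part of this run):

    modprobe nvme-fabrics
    # Ask the discovery service what is exported at 10.0.0.2:4420:
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    # Issue the same fabrics CONNECT the SPDK host keeps retrying above:
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1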
00:36:02.275 [2024-12-06 17:02:50.778268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.778310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.778319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.778324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.778328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89e4000b90 00:36:02.275 [2024-12-06 17:02:50.778338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:02.275 qpair failed and we were unable to recover it. 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Read completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 
00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 Write completed with error (sct=0, sc=8) 00:36:02.275 starting I/O failed 00:36:02.275 [2024-12-06 17:02:50.779254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:02.275 [2024-12-06 17:02:50.788309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.788351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.788368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.788374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.788379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89ec000b90 00:36:02.275 [2024-12-06 17:02:50.788392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:02.275 qpair failed and we were unable to recover it. 00:36:02.275 [2024-12-06 17:02:50.798322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.798374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.798386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.798391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.798396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f89ec000b90 00:36:02.275 [2024-12-06 17:02:50.798407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:02.275 qpair failed and we were unable to recover it. 
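The burst of "Read/Write completed with error (sct=0, sc=8)" lines above is the other half of the same event: sct 0 is the generic status type and sc 0x08 is "Command Aborted due to SQ Deletion", i.e. every I/O still queued on the dying qpair is failed back to the caller before the CQ transport error on qpair id 1 is reported. A hedged triage aid, assuming this console output has been saved to a file named build.log (hypothetical name):

    # Count how many I/Os were failed back as aborted:
    grep -c 'completed with error (sct=0, sc=8)' build.log
    # Break the count down by direction:
    grep -oE '(Read|Write) completed with error \(sct=0, sc=8\)' build.log | sort | uniq -c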
00:36:02.275 [2024-12-06 17:02:50.808257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.808323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.808351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.808362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.808371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2441310 00:36:02.275 [2024-12-06 17:02:50.808395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.275 qpair failed and we were unable to recover it. 00:36:02.275 [2024-12-06 17:02:50.818423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:02.275 [2024-12-06 17:02:50.818520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:02.275 [2024-12-06 17:02:50.818535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:02.275 [2024-12-06 17:02:50.818542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:02.275 [2024-12-06 17:02:50.818549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2441310 00:36:02.275 [2024-12-06 17:02:50.818564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:02.275 qpair failed and we were unable to recover it. 00:36:02.275 [2024-12-06 17:02:50.818749] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:02.275 A controller has encountered a failure and is being reset. 00:36:02.276 [2024-12-06 17:02:50.818868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446e30 (9): Bad file descriptor 00:36:02.276 Controller properly reset. 00:36:02.535 Initializing NVMe Controllers 00:36:02.535 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:02.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:02.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:02.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:02.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:02.535 Initialization complete. Launching workers. 
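This is the recovery path: a Keep Alive submission fails, the host declares the controller failed and resets it, the stale TCP qpair is flushed (hence the "Bad file descriptor" on fd 9), and re-initialization then re-attaches to nqn.2016-06.io.spdk:cnode1 and associates one qpair with each of lcores 0-3. For orientation, a sketch of the target-side configuration such a reset reconnects to — typical SPDK rpc.py calls, not commands captured in this run:

    rpc.py nvmf_create_transport -t TCP
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420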
00:36:02.535 Starting thread on core 1 00:36:02.535 Starting thread on core 2 00:36:02.535 Starting thread on core 3 00:36:02.535 Starting thread on core 0 00:36:02.535 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:02.535 00:36:02.535 real 0m11.472s 00:36:02.535 user 0m21.676s 00:36:02.535 sys 0m3.727s 00:36:02.535 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:02.535 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:02.535 ************************************ 00:36:02.535 END TEST nvmf_target_disconnect_tc2 00:36:02.535 ************************************ 00:36:02.535 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:02.535 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:02.535 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:02.536 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:02.536 17:02:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.536 rmmod nvme_tcp 00:36:02.536 rmmod nvme_fabrics 00:36:02.536 rmmod nvme_keyring 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2516533 ']' 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2516533 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2516533 ']' 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2516533 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2516533 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2516533' 00:36:02.536 killing process with pid 2516533 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 2516533 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2516533 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.536 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.795 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.795 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.795 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.795 17:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.702 17:02:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.702 00:36:04.702 real 0m19.353s 00:36:04.702 user 0m49.554s 00:36:04.702 sys 0m8.044s 00:36:04.702 17:02:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.702 17:02:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:04.702 ************************************ 00:36:04.702 END TEST nvmf_target_disconnect 00:36:04.702 ************************************ 00:36:04.702 17:02:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:04.702 00:36:04.702 real 6m47.897s 00:36:04.702 user 15m57.994s 00:36:04.702 sys 1m49.071s 00:36:04.702 17:02:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.702 17:02:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:04.702 ************************************ 00:36:04.702 END TEST nvmf_host 00:36:04.702 ************************************ 00:36:04.702 17:02:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:04.702 17:02:53 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:04.702 17:02:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:04.702 17:02:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:04.702 17:02:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:04.702 17:02:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:04.702 ************************************ 00:36:04.702 START TEST nvmf_target_core_interrupt_mode 00:36:04.702 ************************************ 00:36:04.702 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:04.964 * Looking for test storage... 00:36:04.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:04.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.964 --rc genhtml_branch_coverage=1 00:36:04.964 --rc genhtml_function_coverage=1 00:36:04.964 --rc genhtml_legend=1 00:36:04.964 --rc geninfo_all_blocks=1 00:36:04.964 --rc geninfo_unexecuted_blocks=1 00:36:04.964 00:36:04.964 ' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:04.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.964 --rc genhtml_branch_coverage=1 00:36:04.964 --rc genhtml_function_coverage=1 00:36:04.964 --rc genhtml_legend=1 00:36:04.964 --rc geninfo_all_blocks=1 00:36:04.964 --rc geninfo_unexecuted_blocks=1 00:36:04.964 00:36:04.964 ' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:04.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.964 --rc genhtml_branch_coverage=1 00:36:04.964 --rc genhtml_function_coverage=1 00:36:04.964 --rc genhtml_legend=1 00:36:04.964 --rc geninfo_all_blocks=1 00:36:04.964 --rc geninfo_unexecuted_blocks=1 00:36:04.964 00:36:04.964 ' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:04.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.964 --rc genhtml_branch_coverage=1 00:36:04.964 --rc genhtml_function_coverage=1 00:36:04.964 --rc genhtml_legend=1 00:36:04.964 --rc geninfo_all_blocks=1 00:36:04.964 --rc geninfo_unexecuted_blocks=1 00:36:04.964 00:36:04.964 ' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.964 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:04.965 ************************************ 00:36:04.965 START TEST nvmf_abort 00:36:04.965 ************************************ 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:04.965 * Looking for test storage... 00:36:04.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.965 --rc genhtml_branch_coverage=1 00:36:04.965 --rc genhtml_function_coverage=1 00:36:04.965 --rc genhtml_legend=1 00:36:04.965 --rc geninfo_all_blocks=1 00:36:04.965 --rc geninfo_unexecuted_blocks=1 00:36:04.965 00:36:04.965 ' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.965 --rc genhtml_branch_coverage=1 00:36:04.965 --rc genhtml_function_coverage=1 00:36:04.965 --rc genhtml_legend=1 00:36:04.965 --rc geninfo_all_blocks=1 00:36:04.965 --rc geninfo_unexecuted_blocks=1 00:36:04.965 00:36:04.965 ' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.965 --rc genhtml_branch_coverage=1 00:36:04.965 --rc genhtml_function_coverage=1 00:36:04.965 --rc genhtml_legend=1 00:36:04.965 --rc geninfo_all_blocks=1 00:36:04.965 --rc geninfo_unexecuted_blocks=1 00:36:04.965 00:36:04.965 ' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:04.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.965 --rc genhtml_branch_coverage=1 00:36:04.965 --rc genhtml_function_coverage=1 00:36:04.965 --rc genhtml_legend=1 00:36:04.965 --rc geninfo_all_blocks=1 00:36:04.965 --rc geninfo_unexecuted_blocks=1 00:36:04.965 00:36:04.965 ' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.965 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.966 17:02:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:04.966 17:02:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:10.242 17:02:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:10.242 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
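gather_supported_nvmf_pci_devs works purely from PCI IDs: it collects candidate devices by vendor:device pair (here both ports of an Intel E810, 0x8086:0x159b, at 0000:31:00.0 and 0000:31:00.1) and then resolves each PCI function to its kernel netdev by globbing the function's net/ directory in sysfs, as the "Found net devices under ..." lines just below show. The same lookup by hand, a minimal sketch against one of the addresses found above:

    pci=0000:31:00.0
    ls "/sys/bus/pci/devices/$pci/net/"      # -> cvl_0_0 on this machine
    cat "/sys/bus/pci/devices/$pci/vendor" \
        "/sys/bus/pci/devices/$pci/device"   # -> 0x8086 / 0x159b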
00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:10.242 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:10.242 Found net devices under 0000:31:00.0: cvl_0_0 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:10.242 Found net devices under 0000:31:00.1: cvl_0_1 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:10.242 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:10.243 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:10.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:10.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:36:10.520 00:36:10.520 --- 10.0.0.2 ping statistics --- 00:36:10.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.520 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:10.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:10.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:36:10.520 00:36:10.520 --- 10.0.0.1 ping statistics --- 00:36:10.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.520 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2522343 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2522343 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2522343 ']' 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:10.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:10.520 17:02:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:10.520 [2024-12-06 17:02:59.017276] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:10.520 [2024-12-06 17:02:59.018318] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:36:10.520 [2024-12-06 17:02:59.018357] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:10.520 [2024-12-06 17:02:59.102681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:10.520 [2024-12-06 17:02:59.120350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:10.520 [2024-12-06 17:02:59.120380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:10.520 [2024-12-06 17:02:59.120388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:10.520 [2024-12-06 17:02:59.120395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:10.520 [2024-12-06 17:02:59.120401] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:10.520 [2024-12-06 17:02:59.121774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:10.520 [2024-12-06 17:02:59.121923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.520 [2024-12-06 17:02:59.121926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:10.520 [2024-12-06 17:02:59.171438] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:10.520 [2024-12-06 17:02:59.172387] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:10.520 [2024-12-06 17:02:59.172797] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
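What the trace above has assembled, in brief: the two ports of one E810 NIC are split across a fresh network namespace so a single host can play both initiator (cvl_0_1, 10.0.0.1, root namespace) and target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk), an iptables rule tagged SPDK_NVMF opens TCP port 4420, and nvmf_tgt is then launched inside the namespace in interrupt mode. A minimal sketch of the same bring-up (relative paths and the rpc_get_methods readiness probe are assumptions, not the harness's literal waitforlisten code):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'               # tag lets teardown strip just this rule
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
until ./scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.1; done   # wait for /var/tmp/spdk.sock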
00:36:10.520 [2024-12-06 17:02:59.172843] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.457 [2024-12-06 17:02:59.822746] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.457 Malloc0 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.457 Delay0 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.457 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.458 [2024-12-06 17:02:59.894684] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.458 17:02:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:11.458 [2024-12-06 17:02:59.998936] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:13.993 Initializing NVMe Controllers 00:36:13.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:13.993 controller IO queue size 128 less than required 00:36:13.993 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:13.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:13.993 Initialization complete. Launching workers. 
00:36:13.993 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28986 00:36:13.993 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29047, failed to submit 66 00:36:13.993 success 28986, unsuccessful 61, failed 0 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:13.993 rmmod nvme_tcp 00:36:13.993 rmmod nvme_fabrics 00:36:13.993 rmmod nvme_keyring 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2522343 ']' 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2522343 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2522343 ']' 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2522343 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522343 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522343' 00:36:13.993 killing process with pid 2522343 
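The abort counters reconcile exactly: 28,986 successful + 61 unsuccessful = 29,047 aborts submitted, with a further 66 that could not be submitted; each successfully aborted command reappears on the I/O side as one of the 28,986 failed reads, leaving just 127 reads to complete normally through Delay0's one-second latency. Teardown below then only has to unload the nvme-tcp/fabrics/keyring modules, strip the comment-tagged firewall rule (iptables-save | grep -v SPDK_NVMF | iptables-restore), and remove the namespace.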
00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2522343 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2522343 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:13.993 17:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.898 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:15.898 00:36:15.898 real 0m10.893s 00:36:15.898 user 0m9.895s 00:36:15.898 sys 0m5.199s 00:36:15.898 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.898 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:15.898 ************************************ 00:36:15.898 END TEST nvmf_abort 00:36:15.898 ************************************ 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:15.899 ************************************ 00:36:15.899 START TEST nvmf_ns_hotplug_stress 00:36:15.899 ************************************ 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:15.899 * Looking for test storage... 
00:36:15.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:15.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.899 --rc genhtml_branch_coverage=1 00:36:15.899 --rc genhtml_function_coverage=1 00:36:15.899 --rc genhtml_legend=1 00:36:15.899 --rc geninfo_all_blocks=1 00:36:15.899 --rc geninfo_unexecuted_blocks=1 00:36:15.899 00:36:15.899 ' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:15.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.899 --rc genhtml_branch_coverage=1 00:36:15.899 --rc genhtml_function_coverage=1 00:36:15.899 --rc genhtml_legend=1 00:36:15.899 --rc geninfo_all_blocks=1 00:36:15.899 --rc geninfo_unexecuted_blocks=1 00:36:15.899 00:36:15.899 ' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:15.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.899 --rc genhtml_branch_coverage=1 00:36:15.899 --rc genhtml_function_coverage=1 00:36:15.899 --rc genhtml_legend=1 00:36:15.899 --rc geninfo_all_blocks=1 00:36:15.899 --rc geninfo_unexecuted_blocks=1 00:36:15.899 00:36:15.899 ' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:15.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.899 --rc genhtml_branch_coverage=1 00:36:15.899 --rc genhtml_function_coverage=1 
00:36:15.899 --rc genhtml_legend=1 00:36:15.899 --rc geninfo_all_blocks=1 00:36:15.899 --rc geninfo_unexecuted_blocks=1 00:36:15.899 00:36:15.899 ' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
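One detail worth noting in the common.sh setup above: the initiator's identity pair is derived once from nvme-cli, roughly as follows (a sketch of the effect, not the script's literal text):

NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}           # strip through the last colon, keeping the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

which is why the host ID traced above (801c19ac-...) is simply the tail of the generated host NQN.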
00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.899 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.900 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:16.160 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:16.160 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:16.160 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:16.160 17:03:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:21.454 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.455 17:03:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.455 17:03:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:21.455 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:21.455 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.455 
17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:21.455 Found net devices under 0000:31:00.0: cvl_0_0 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:21.455 Found net devices under 0000:31:00.1: cvl_0_1 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.455 17:03:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.455 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:21.456 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.456 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.456 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.456 17:03:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:21.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:36:21.456 00:36:21.456 --- 10.0.0.2 ping statistics --- 00:36:21.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.456 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:21.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:36:21.456 00:36:21.456 --- 10.0.0.1 ping statistics --- 00:36:21.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.456 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2527950 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2527950 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2527950 ']' 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
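The device scan re-run above (gather_supported_nvmf_pci_devs) reduces, for each supported PCI function, to globbing the kernel's net directory; a simplified sketch of the steps visible at common.sh@411/@427/@429:

pci=0000:31:00.0                                    # one of the two E810 ports found
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # -> .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")             # drop the path prefix, keep the ifname
net_devs+=("${pci_net_devs[@]}")

With two up interfaces collected, cvl_0_0 again becomes the target-side interface and cvl_0_1 the initiator side, and the namespace, iptables, and ping bring-up repeats exactly as in the abort test.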
00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.456 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:21.456 [2024-12-06 17:03:10.088465] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:21.456 [2024-12-06 17:03:10.089627] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:36:21.456 [2024-12-06 17:03:10.089683] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:21.716 [2024-12-06 17:03:10.182930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:21.716 [2024-12-06 17:03:10.210611] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:21.716 [2024-12-06 17:03:10.210664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:21.716 [2024-12-06 17:03:10.210672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:21.716 [2024-12-06 17:03:10.210679] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:21.716 [2024-12-06 17:03:10.210686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:21.716 [2024-12-06 17:03:10.212564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:21.716 [2024-12-06 17:03:10.212725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.716 [2024-12-06 17:03:10.212726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:21.716 [2024-12-06 17:03:10.277871] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:21.716 [2024-12-06 17:03:10.278871] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:21.716 [2024-12-06 17:03:10.278881] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:21.716 [2024-12-06 17:03:10.278958] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
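A note on the core masks: -m 0xE is binary 1110, i.e. cores 1-3, which matches 'Total cores available: 3' and the three reactor start notices, while core 0 is deliberately left free for the perf initiator launched later with -c 0x1 (binary 0001). Because the target runs with --interrupt-mode, those reactors block on event file descriptors between items of work instead of busy-polling; the 'Set spdk_thread (...) to intr mode' notices confirm the switch for the app thread and each nvmf poll group.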
00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:36:22.286 17:03:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:22.547 [2024-12-06 17:03:11.081491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:22.547 17:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:22.806 17:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:22.806 [2024-12-06 17:03:11.450280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:22.806 17:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:23.067 17:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:23.327 Malloc0 00:36:23.327 17:03:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:23.327 Delay0 00:36:23.587 17:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:23.587 17:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:23.846 NULL1 00:36:23.846 17:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
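Everything after this point is the hot-plug loop itself: spdk_nvme_perf issues 512-byte random reads for 30 seconds while the shell repeatedly detaches namespace 1, reattaches Delay0, and resizes the attached NULL1 bdev. A hedged reconstruction of the loop being traced (rpc.py path and exact loop shape assumed from the ns_hotplug_stress.sh line numbers visible below):

rpc=./scripts/rpc.py
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do               # run until the 30 s perf job exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_resize NULL1 $(( ++null_size ))      # 1001, 1002, ... (sizes in MB)
done

Reads that land in a detach window fail with sct=0, sc=11 - presumably generic status 0x0B, Invalid Namespace or Format - and perf's -Q 1000 collapses the flood into the 'Message suppressed 999 times' lines that dominate the rest of the trace.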
00:36:23.847 17:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:23.847 17:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2528567 00:36:23.847 17:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:23.847 17:03:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:25.228 Read completed with error (sct=0, sc=11) 00:36:25.228 17:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:25.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:25.228 17:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:25.228 17:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:25.228 true 00:36:25.487 17:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:25.487 17:03:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:26.422 17:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:26.422 17:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:26.422 17:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:26.422 true 00:36:26.422 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:26.422 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:26.696 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.035 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:27.035 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:27.035 true 00:36:27.035 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:27.035 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.301 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.301 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:27.301 17:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:27.561 true 00:36:27.561 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:27.561 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:27.561 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:27.820 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:27.820 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:28.079 true 00:36:28.080 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:28.080 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:28.080 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:28.339 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:28.339 17:03:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:28.339 true 00:36:28.339 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 2528567 00:36:28.339 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.278 17:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:29.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:29.538 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:29.538 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:36:29.538 true 00:36:29.538 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:29.538 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:29.797 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.056 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:36:30.056 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:36:30.056 true 00:36:30.056 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:30.056 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.314 17:03:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.573 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:36:30.573 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:36:30.573 true 00:36:30.573 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:30.573 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:30.832 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:30.832 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:36:30.832 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:36:31.090 true 00:36:31.090 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:31.090 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.349 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.349 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:36:31.349 17:03:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:36:31.607 true 00:36:31.607 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:31.608 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:31.608 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:31.866 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:36:31.866 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:36:32.125 true 00:36:32.125 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:32.125 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:32.125 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:32.384 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:36:32.384 17:03:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:36:32.384 true 
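The blocks above and below are successive iterations of the hotplug loop in ns_hotplug_stress.sh (the sh@44 through sh@50 markers): as long as the perf process started at 17:03:12 (PID 2528567) is still alive, namespace 1 is hot-removed, Delay0 is re-attached, and NULL1 is grown by one megabyte, which is why the counter climbs one unit per pass (1001, 1002, ...). Reconstructed from the xtrace, with $rpc as in the earlier sketch, the loop body is approximately:

    null_size=1000                                                    # sh@25, set before the loop
    while kill -0 "$PERF_PID"; do                                     # sh@44: run until perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: re-attach Delay0
        ((++null_size))                                               # sh@49
        $rpc bdev_null_resize NULL1 "$null_size"                      # sh@50: grow NULL1 by 1 MB
    done

Each resize changes the size of the bdev backing NSID 2 while the initiator is mid-workload, so the test exercises both hot-remove/hot-add and live-resize paths at once.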
00:36:32.384 17:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:32.384 17:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.319 17:03:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:33.578 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:36:33.578 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:36:33.578 true 00:36:33.838 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:33.838 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:33.838 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.098 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:36:34.098 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:36:34.098 true 00:36:34.098 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:34.098 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.358 17:03:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.618 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:36:34.618 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:36:34.618 true 00:36:34.618 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:34.618 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:34.878 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:34.878 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:36:34.878 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:36:35.137 true 00:36:35.137 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:35.137 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:35.397 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:35.397 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:36:35.397 17:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:36:35.657 true 00:36:35.657 17:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:35.657 17:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:36.595 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:36.855 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:36:36.855 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:36:36.855 true 00:36:36.855 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:36.855 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.114 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.115 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:36:37.115 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:36:37.373 true 00:36:37.373 17:03:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:37.373 17:03:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.632 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:37.632 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:36:37.632 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:36:37.891 true 00:36:37.891 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:37.891 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:37.891 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.151 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:36:38.151 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:36:38.411 true 00:36:38.411 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:38.411 17:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.411 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:38.671 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:36:38.671 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:36:38.671 true 00:36:38.930 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:38.930 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:38.930 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.191 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:36:39.191 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:36:39.191 true 00:36:39.191 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:39.191 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:39.451 17:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:39.712 17:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:36:39.712 17:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:36:39.712 true 00:36:39.712 17:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:39.712 17:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:40.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:40.651 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:40.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:40.912 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:36:40.912 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:36:41.172 true 00:36:41.172 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:41.172 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.172 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.432 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:36:41.432 17:03:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:36:41.432 true 00:36:41.432 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:41.432 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:41.691 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:41.953 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:36:41.953 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:36:41.953 true 00:36:41.953 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:41.953 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.213 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.213 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:36:42.213 17:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:36:42.473 true 00:36:42.473 17:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:42.473 17:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:42.733 17:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:42.733 17:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:36:42.733 17:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:36:42.993 true 00:36:42.993 17:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:42.994 17:03:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
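The process being polled, PID 2528567, is the spdk_nvme_perf run launched at sh@40: a 30-second, single-core (-c 0x1) random-read workload at queue depth 128 with 512-byte I/Os against the TCP target. The "Read completed with error (sct=0, sc=11)" messages are expected while namespaces are yanked mid-I/O, and the -Q 1000 argument lets the run continue past those errors while rate-limiting the messages (hence "Message suppressed 999 times"). A minimal sketch of the launch-and-guard pattern, assuming the target from the prologue is listening:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!                  # sh@42; the hotplug loop polls this with kill -0
    # ... hotplug loop runs for the 30 s the workload lasts ...
    wait "$PERF_PID"             # sh@53: reap perf and let it print its final stats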
00:36:43.934 17:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:43.934 17:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:36:43.934 17:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:36:44.194 true 00:36:44.194 17:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:44.194 17:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.454 17:03:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.454 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:36:44.454 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:36:44.714 true 00:36:44.714 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:44.714 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:44.714 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:44.975 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:36:44.975 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:36:44.975 true 00:36:45.234 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:45.234 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.234 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:45.494 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:36:45.494 17:03:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:36:45.494 true 00:36:45.494 17:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:45.495 17:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:45.755 17:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:46.015 17:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:36:46.015 17:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:36:46.015 true 00:36:46.015 17:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:46.015 17:03:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:46.970 17:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.228 17:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:36:47.228 17:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:36:47.228 true 00:36:47.228 17:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:47.228 17:03:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:47.488 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:47.488 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:36:47.488 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:36:47.748 true 00:36:47.748 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:47.748 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.008 17:03:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.008 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:36:48.008 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:36:48.267 true 00:36:48.267 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:48.267 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.526 17:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:48.526 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:36:48.526 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:36:48.786 true 00:36:48.786 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:48.786 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:48.786 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:49.046 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:36:49.046 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:36:49.046 true 00:36:49.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:49.306 17:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.245 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.245 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:36:50.245 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:36:50.505 true 00:36:50.505 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:50.505 17:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:50.505 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:50.765 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:36:50.766 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:36:50.766 true 00:36:50.766 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:50.766 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.026 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.286 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:36:51.286 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:36:51.287 true 00:36:51.287 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:51.287 17:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:51.547 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:51.807 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:36:51.807 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:36:51.807 true 00:36:51.807 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:51.807 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:52.066 17:03:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.066 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:36:52.066 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:36:52.326 true 00:36:52.326 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:52.326 17:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:53.265 17:03:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.525 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:36:53.525 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:36:53.525 true 00:36:53.525 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:53.525 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.785 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.045 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:36:54.045 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:36:54.045 true 00:36:54.045 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567 00:36:54.045 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.045 Initializing NVMe Controllers 00:36:54.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:54.045 Controller IO queue size 128, less than required. 00:36:54.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:54.045 Controller IO queue size 128, less than required. 
00:36:54.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:54.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:54.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:36:54.045 Initialization complete. Launching workers.
00:36:54.045 ========================================================
00:36:54.045 Latency(us)
00:36:54.045 Device Information : IOPS MiB/s Average min max
00:36:54.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 394.19 0.19 116943.48 2530.59 1021349.64
00:36:54.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10766.87 5.26 11888.04 1136.43 374657.64
00:36:54.045 ========================================================
00:36:54.045 Total : 11161.06 5.45 15598.44 1136.43 1021349.64
00:36:54.045
00:36:54.305 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:36:54.305 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:36:54.305 17:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:36:54.566 true
00:36:54.566 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2528567
00:36:54.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2528567) - No such process
00:36:54.566 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2528567
00:36:54.566 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:54.825 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:36:54.825 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:36:54.825 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:36:54.825 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:36:54.825 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:36:54.825 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:36:55.086 null0
00:36:55.086 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:36:55.086 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:55.086
17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:55.086 null1 00:36:55.086 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:55.086 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:55.086 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:55.345 null2 00:36:55.345 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:55.345 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:55.345 17:03:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:55.605 null3 00:36:55.605 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:55.605 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:55.605 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:55.605 null4 00:36:55.605 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:55.605 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:55.605 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:55.863 null5 00:36:55.863 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:55.863 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:55.863 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:55.863 null6 00:36:55.863 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:55.863 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:55.863 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:56.122 null7 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:56.122 17:03:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
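Once kill -0 starts failing (the "No such process" message above), the 30-second perf window is over: both namespaces are removed and the test moves to its concurrency phase. Eight 100 MB null bdevs with 4096-byte blocks, null0 through null7, are created, then eight add_remove workers are forked with their PIDs collected in the pids array; the interleaved sh@59/sh@62/sh@64 records above are those forks racing each other in the xtrace. A sketch of the pattern as far as this excerpt shows it, with $rpc as before; the remove call inside the worker and the final wait are assumed counterparts, not visible in this excerpt:

    add_remove() {                                   # sh@14-@17
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"  # assumed counterpart
        done
    }
    nthreads=8; pids=()                              # sh@58
    for ((i = 0; i < nthreads; i++)); do             # sh@59-@60
        $rpc bdev_null_create "null$i" 100 4096      # 100 MB, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do             # sh@62-@64
        add_remove $((i + 1)) "null$i" &             # NSIDs 1..8 against null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"                                # assumed join, beyond this excerpt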
00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.122 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2535656 2535657 2535659 2535661 2535662 2535665 2535668 2535670 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.123 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:56.381 17:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.381 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:56.640 17:03:45 
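[The interleaved @62-@66 and @14-@18 traces above are eight backgrounded add_remove workers racing each other. A sketch of the pattern, reconstructed from the xtrace output (function and variable names are taken from the trace; the PIDs passed to wait are the ones logged at @66):]

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

add_remove() {
    # Repeatedly attach a bdev as namespace $nsid of cnode1, then detach it
    # (script lines @14-@18: ten add/remove cycles per worker).
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; i++ )); do
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

pids=()
for (( i = 0; i < nthreads; i++ )); do
    add_remove $(( i + 1 )) "null$i" &   # namespace IDs are 1-based, bdev names 0-based
    pids+=($!)
done
wait "${pids[@]}"                        # 2535656 2535657 2535659 ... in this run

[Because the workers run concurrently, their set -x output interleaves in the log, which is why the add/remove entries above and below appear out of order.]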
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:56.640 17:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.640 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.899 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:56.900 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.159 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:57.418 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.418 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.418 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:57.418 17:03:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:57.419 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:57.419 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:57.419 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.419 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.419 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:57.419 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.419 17:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.419 17:03:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:57.419 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.725 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:57.984 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:57.985 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.243 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:58.503 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.503 17:03:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.503 
17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:58.503 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:58.763 17:03:47 
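[Not part of the traced script, but a way to spot-check the target's namespace state while this churn is running — a sketch assuming the same rpc.py path; nvmf_get_subsystems is a standard SPDK RPC:]

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Dumps every subsystem as JSON, including the current namespaces array of
# nqn.2016-06.io.spdk:cnode1; repeated calls during the stress loop should
# show namespaces appearing and disappearing as the workers add and remove them.
"$rpc_py" nvmf_get_subsystems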
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:58.763 
17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:58.763 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.024 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.284 17:03:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.284 17:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:59.542 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:59.542 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.542 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.542 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.542 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.542 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.542 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:59.543 rmmod nvme_tcp 00:36:59.543 rmmod nvme_fabrics 00:36:59.543 rmmod nvme_keyring 00:36:59.543 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2527950 ']' 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2527950 
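The out-of-order interleaving of the add/remove RPCs above is the whole point of the stress: target/ns_hotplug_stress.sh launches its namespace attaches and detaches concurrently, so they land on the live subsystem in arbitrary order while I/O is running. Stripped of that concurrency, the cycle each iteration exercises looks roughly like this (a minimal sketch reconstructed from the @16-@18 trace markers; the rpc.py path, NQN, and null bdev names are taken from the log, the random pick is an assumption standing in for the script's real scheduling):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        n=$(( RANDOM % 8 + 1 ))                                # one of the eight null bdevs
        $rpc nvmf_subsystem_add_ns -n $n $nqn null$(( n - 1 )) # attach nullX as NSID n
        $rpc nvmf_subsystem_remove_ns $nqn $n                  # detach it again
    done

Each add/remove pair forces the target to plumb a namespace into, and back out of, a subsystem that stays live throughout, without disturbing the other seven.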
00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2527950 ']' 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2527950 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2527950 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2527950' 00:36:59.801 killing process with pid 2527950 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2527950 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2527950 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:59.801 17:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:02.337 00:37:02.337 real 0m45.997s 00:37:02.337 user 2m54.885s 00:37:02.337 sys 0m17.606s 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.337 17:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:02.337 ************************************ 00:37:02.337 END TEST nvmf_ns_hotplug_stress 00:37:02.337 ************************************ 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:02.337 ************************************ 00:37:02.337 START TEST nvmf_delete_subsystem 00:37:02.337 ************************************ 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:02.337 * Looking for test storage... 00:37:02.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:02.337 17:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.337 --rc genhtml_branch_coverage=1 00:37:02.337 --rc genhtml_function_coverage=1 00:37:02.337 --rc genhtml_legend=1 00:37:02.337 --rc geninfo_all_blocks=1 00:37:02.337 --rc geninfo_unexecuted_blocks=1 00:37:02.337 00:37:02.337 ' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.337 --rc genhtml_branch_coverage=1 00:37:02.337 --rc genhtml_function_coverage=1 00:37:02.337 --rc genhtml_legend=1 00:37:02.337 --rc geninfo_all_blocks=1 00:37:02.337 --rc geninfo_unexecuted_blocks=1 00:37:02.337 00:37:02.337 ' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.337 --rc genhtml_branch_coverage=1 00:37:02.337 --rc genhtml_function_coverage=1 00:37:02.337 --rc genhtml_legend=1 00:37:02.337 --rc geninfo_all_blocks=1 00:37:02.337 --rc 
geninfo_unexecuted_blocks=1 00:37:02.337 00:37:02.337 ' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:02.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.337 --rc genhtml_branch_coverage=1 00:37:02.337 --rc genhtml_function_coverage=1 00:37:02.337 --rc genhtml_legend=1 00:37:02.337 --rc geninfo_all_blocks=1 00:37:02.337 --rc geninfo_unexecuted_blocks=1 00:37:02.337 00:37:02.337 ' 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.337 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:02.338 17:03:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:02.338 17:03:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:07.632 17:03:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:07.632 17:03:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:07.632 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:07.632 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.632 17:03:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:07.632 Found net devices under 0000:31:00.0: cvl_0_0 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:07.632 Found net devices under 0000:31:00.1: cvl_0_1 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:07.632 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:07.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:07.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.591 ms 00:37:07.633 00:37:07.633 --- 10.0.0.2 ping statistics --- 00:37:07.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.633 rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:07.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:07.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:37:07.633 00:37:07.633 --- 10.0.0.1 ping statistics --- 00:37:07.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.633 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2540836 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2540836 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2540836 ']' 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
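Before the target app comes up, nvmftestinit has carved the two-port NIC into a self-contained test rig: the net device behind each supported PCI function is located via pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), and the target-side port is then moved into its own network namespace so 10.0.0.1 and 10.0.0.2 reach each other over real wire on one host. Condensed from the trace above (interface names, addresses, and the firewall rule are verbatim from the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The real rule additionally tags itself with an SPDK_NVMF comment, which is what lets the earlier teardown strip it back out with iptables-save | grep -v SPDK_NVMF | iptables-restore. The sub-millisecond round trips in the two ping blocks confirm both directions before any NVMe traffic is attempted.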
00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:07.633 17:03:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:07.633 [2024-12-06 17:03:55.846411] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:07.633 [2024-12-06 17:03:55.847400] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:37:07.633 [2024-12-06 17:03:55.847436] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.633 [2024-12-06 17:03:55.932249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:07.633 [2024-12-06 17:03:55.954799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.633 [2024-12-06 17:03:55.954838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.633 [2024-12-06 17:03:55.954847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.633 [2024-12-06 17:03:55.954854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.633 [2024-12-06 17:03:55.954860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:07.633 [2024-12-06 17:03:55.956366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.633 [2024-12-06 17:03:55.956461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.633 [2024-12-06 17:03:56.014199] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:07.633 [2024-12-06 17:03:56.014731] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:07.633 [2024-12-06 17:03:56.014757] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
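The pair of "Reactor started" notices and the thread intr-mode messages above are the visible effect of the launch flags: nvmf_tgt is started inside the target namespace with --interrupt-mode on a two-core mask, so both reactors and every spdk_thread park on event file descriptors instead of busy-polling. Roughly what nvmf/common.sh@508 plus waitforlisten amount to (the polling loop is an illustrative stand-in, not waitforlisten's actual body):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    # block until the app answers on its default RPC socket, /var/tmp/spdk.sock
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

-m 0x3 pins the two reactors to cores 0 and 1, matching the two nvmf_tgt poll groups the log shows being switched to intr mode.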
00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.203 [2024-12-06 17:03:56.649343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.203 [2024-12-06 17:03:56.669635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.203 NULL1 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.203 17:03:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.203 Delay0 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2540886 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:08.203 17:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:08.203 [2024-12-06 17:03:56.742259] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
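Everything is now in place for the actual regression: a null bdev is wrapped in a delay bdev that adds a full second of latency to every operation, exported over TCP, and hammered by perf at queue depth 128, which guarantees a deep backlog of in-flight commands when the subsystem is torn down mid-run. The sequence the RPCs above perform, condensed (all names and values are from the log; the bdev_delay_create arguments are average and p99 read/write latencies in microseconds):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                # 1000 MB backing, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # 1 s on every read and write
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &       # 5 s run, QD 128, 70% reads
    sleep 2                                             # let the queue fill
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-I/O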
00:37:10.110 17:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:10.110 17:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:10.110 17:03:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:10.370 [repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' markers condensed]
00:37:10.371 [2024-12-06 17:03:58.836248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf77920 is same with the state(6) to be set
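The two error shapes in the storm above decode as follows. Each completion's (sct=0, sc=8) status is, per the NVMe base specification, status code type 0 (generic command status) with status code 0x08, Command Aborted due to SQ Deletion: the rpc_cmd nvmf_delete_subsystem call tears the submission queues down while spdk_nvme_perf still has I/O outstanding, so every queued request comes back aborted. The -6 in 'starting I/O failed: -6' is the negative errno (likely -ENXIO) from submitting against a qpair that is already gone, and the nvme_tcp_qpair_set_recv_state message records the initiator forcing that qpair into its terminal receive state (reading state(6) as the error recv state of nvme_tcp's PDU state machine is an assumption; the enum layout varies across SPDK versions). A quick way to summarize both from a saved copy of this console output (the build.log filename is an assumption of this sketch):

  # Tally aborted reads vs. writes, then list each qpair that hit the error recv state
  grep -o '[RW][a-z]* completed with error (sct=0, sc=8)' build.log | sort | uniq -c
  grep -o 'tqpair=0x[0-9a-f]* is same with the state(6) to be set' build.log | sort -u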
00:37:10.371 [further repeated 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers, through 00:37:10.372, condensed]
00:37:11.311 [2024-12-06 17:03:59.799315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20260 is same with the state(6) to be set
00:37:11.311 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions condensed]
00:37:11.312 [2024-12-06 17:03:59.838467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a6c00d7c0 is same with the state(6) to be set
00:37:11.312 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions condensed]
00:37:11.312 [2024-12-06 17:03:59.838611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a6c000c40 is same with the state(6) to be set
00:37:11.312 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions condensed]
00:37:11.312 [2024-12-06 17:03:59.838676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf775f0 is same with the state(6) to be set
00:37:11.312 [repeated 'Read/Write completed with error (sct=0, sc=8)' completions condensed]
00:37:11.312 [2024-12-06 17:03:59.838814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a6c00d020 is same with the state(6) to be set
00:37:11.312 Initializing NVMe Controllers
00:37:11.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:11.312 Controller IO queue size 128, less than required.
00:37:11.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:11.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:11.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:11.312 Initialization complete. Launching workers.
00:37:11.312 ========================================================
00:37:11.312 Latency(us)
00:37:11.312 Device Information : IOPS MiB/s Average min max
00:37:11.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 151.53 0.07 890440.75 244.29 1009353.63
00:37:11.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 195.75 0.10 943277.59 441.71 1011412.05
00:37:11.312 ========================================================
00:37:11.312 Total : 347.28 0.17 920222.89 244.29 1011412.05
00:37:11.312
00:37:11.312 [2024-12-06 17:03:59.839470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20260 (9): Bad file descriptor
00:37:11.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:11.312 17:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:11.312 17:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:11.312 17:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2540886
00:37:11.312 17:03:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2540886
00:37:11.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2540886) - No such process
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2540886
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2540886
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2540886
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.882 [2024-12-06 17:04:00.361576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2541857 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2541857 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:11.882 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:11.882 [2024-12-06 17:04:00.411237] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
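With the first perf run reaped (the kill/wait trace above shows PID 2540886 is already gone, and NOT wait confirms the expected nonzero exit), the test rebuilds the target state and starts a second run: nvmf_create_subsystem recreates cnode1 with serial SPDK00000000000001 and a ten-namespace cap (-m 10), nvmf_subsystem_add_listener reattaches the TCP listener on 10.0.0.2:4420, nvmf_subsystem_add_ns exposes the Delay0 bdev again, and spdk_nvme_perf is launched with core mask 0xC (lcores 2 and 3, matching the two 'Associating ... with lcore' lines), queue depth 128, a random 70/30 read/write mix (-w randrw -M 70), 512-byte I/O, and a 3-second run; reading -P 4 as the qpair count used by perf is an assumption here. As a sanity check on the first run's table, 151.53 IOPS at 512 bytes is about 0.07 MiB/s, matching the MiB/s column, and the roughly 0.9 s average latency is consistent with requests having sat queued until the abort. A condensed sketch of the same sequence driven through scripts/rpc.py, which is effectively what the rpc_cmd wrapper calls, with the poll loop mirroring the (( delay++ > 20 )) / sleep 0.5 trace that follows:

  # Recreate the subsystem, listener and namespace, then rerun perf against it
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  # Poll until perf exits, giving up after ~10 s of 0.5 s sleeps
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && exit 1
      sleep 0.5
  done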
00:37:12.450 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:12.450 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2541857
00:37:12.450 17:04:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[five further identical poll iterations at 00:37:12.709, 00:37:13.278, 00:37:13.860, 00:37:14.549 and 00:37:14.835 condensed]
00:37:15.094 Initializing NVMe Controllers
00:37:15.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:15.094 Controller IO queue size 128, less than required.
00:37:15.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:15.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:15.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:15.094 Initialization complete. Launching workers.
00:37:15.094 ========================================================
00:37:15.094 Latency(us)
00:37:15.094 Device Information : IOPS MiB/s Average min max
00:37:15.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003790.32 1000273.16 1010487.02
00:37:15.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002758.47 1000097.22 1010399.04
00:37:15.094 ========================================================
00:37:15.094 Total : 256.00 0.12 1003274.39 1000097.22 1010487.02
00:37:15.094
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2541857
00:37:15.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2541857) - No such process
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2541857
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:15.355 rmmod nvme_tcp
00:37:15.355 rmmod nvme_fabrics
00:37:15.355 rmmod nvme_keyring
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2540836 ']'
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2540836
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2540836 ']'
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2540836
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:15.355 17:04:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540836 00:37:15.355 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:15.355 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:15.355 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540836' 00:37:15.355 killing process with pid 2540836 00:37:15.355 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2540836 00:37:15.355 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2540836 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:15.616 17:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:17.519 00:37:17.519 real 0m15.668s 00:37:17.519 user 0m25.463s 00:37:17.519 sys 0m5.433s 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:17.519 ************************************ 00:37:17.519 END TEST nvmf_delete_subsystem 00:37:17.519 ************************************ 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
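nvmftestfini's teardown is visible in the trace above: nvmfcleanup unloads the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), killprocess stops the target app (PID 2540836, running as reactor_0), and the network state is unwound by replaying the firewall rules without the test's entries and flushing the test interface, as the entries that follow show. A reduced sketch of those cleanup steps under the same assumptions the trace shows (cvl_0_1 as the initiator-side interface from this run; all of it needs root):

  # Unload the NVMe-oF initiator modules pulled in for the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Replay the firewall rules minus the SPDK_NVMF entries the test added
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Remove the test IPv4 address from the initiator interface
  ip -4 addr flush cvl_0_1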
common/autotest_common.sh@1111 -- # xtrace_disable 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:17.519 ************************************ 00:37:17.519 START TEST nvmf_host_management 00:37:17.519 ************************************ 00:37:17.519 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:17.779 * Looking for test storage... 00:37:17.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:17.779 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.780 --rc genhtml_branch_coverage=1 00:37:17.780 --rc genhtml_function_coverage=1 00:37:17.780 --rc genhtml_legend=1 00:37:17.780 --rc geninfo_all_blocks=1 00:37:17.780 --rc geninfo_unexecuted_blocks=1 00:37:17.780 00:37:17.780 ' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.780 --rc genhtml_branch_coverage=1 00:37:17.780 --rc genhtml_function_coverage=1 00:37:17.780 --rc genhtml_legend=1 00:37:17.780 --rc geninfo_all_blocks=1 00:37:17.780 --rc geninfo_unexecuted_blocks=1 00:37:17.780 00:37:17.780 ' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.780 --rc genhtml_branch_coverage=1 00:37:17.780 --rc genhtml_function_coverage=1 00:37:17.780 --rc genhtml_legend=1 00:37:17.780 --rc geninfo_all_blocks=1 00:37:17.780 --rc geninfo_unexecuted_blocks=1 00:37:17.780 00:37:17.780 ' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:17.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:17.780 --rc genhtml_branch_coverage=1 00:37:17.780 --rc genhtml_function_coverage=1 00:37:17.780 --rc genhtml_legend=1 
00:37:17.780 --rc geninfo_all_blocks=1 00:37:17.780 --rc geninfo_unexecuted_blocks=1 00:37:17.780 00:37:17.780 ' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/golangci, /opt/protoc and /opt/go entries condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same repeated tool prefixes and system tail condensed] 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated tool prefixes and system tail condensed] 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo [exported PATH value condensed] 00:37:17.780 17:04:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:17.780 17:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:23.055 17:04:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:23.055 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:23.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:23.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
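gather_supported_nvmf_pci_devs works by PCI vendor:device matching: 0x8086:0x159b falls into the e810 array (an Intel E810-family NIC bound to the ice driver, as the [[ ice == unknown ]] checks show), so both ports at 0000:31:00.0 and 0000:31:00.1 are accepted, and the kernel interface behind each port is then read out of sysfs exactly as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion and the 'Found net devices under ...' echoes that follow show. A small sketch of that sysfs lookup (using the first PCI address from this run; any function with a bound net driver resolves the same way):

  # Resolve the net device name(s) behind a PCI function, as nvmf/common.sh does
  pci=0000:31:00.0
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] && echo "net device under $pci: ${dev##*/}"
  done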
00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:23.056 Found net devices under 0000:31:00.0: cvl_0_0 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:23.056 Found net devices under 0000:31:00.1: cvl_0_1 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:23.056 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:23.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:23.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:37:23.056 00:37:23.056 --- 10.0.0.2 ping statistics --- 00:37:23.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.056 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:23.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:23.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:37:23.057 00:37:23.057 --- 10.0.0.1 ping statistics --- 00:37:23.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.057 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2546873 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2546873 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2546873 ']' 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:23.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.057 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.057 [2024-12-06 17:04:11.708943] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:23.057 [2024-12-06 17:04:11.709915] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:37:23.057 [2024-12-06 17:04:11.709950] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.315 [2024-12-06 17:04:11.797500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:23.315 [2024-12-06 17:04:11.820031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.315 [2024-12-06 17:04:11.820068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.315 [2024-12-06 17:04:11.820081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.315 [2024-12-06 17:04:11.820089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.315 [2024-12-06 17:04:11.820096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:23.315 [2024-12-06 17:04:11.821944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:23.315 [2024-12-06 17:04:11.822093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:23.315 [2024-12-06 17:04:11.822221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:23.315 [2024-12-06 17:04:11.822223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.315 [2024-12-06 17:04:11.873092] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:23.315 [2024-12-06 17:04:11.874104] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:23.316 [2024-12-06 17:04:11.874431] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:23.316 [2024-12-06 17:04:11.874711] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:23.316 [2024-12-06 17:04:11.874719] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
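By this point the harness has built its point-to-point rig and brought up the target. Condensed from the commands visible in the trace: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the NVMe-oF target at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP port 4420, reachability is ping-checked in both directions, and nvmf_tgt is launched inside the namespace in interrupt mode on core mask 0x1E (cores 1-4). A sketch of the sequence; the readiness loop at the end is a simplified stand-in for the suite's waitforlisten helper:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP traffic on the port the test listens on.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                         # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> initiator

    # Start the target inside the namespace, as the log does.
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
    nvmfpid=$!
    until ./scripts/rpc.py framework_wait_init 2>/dev/null; do
        sleep 0.5
    done

Because the RPC socket lives on the shared filesystem, later rpc_cmd calls reach the namespaced target without entering the namespace; only network-facing commands need the ip netns exec prefix.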
00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.316 [2024-12-06 17:04:11.919061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.316 17:04:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.316 Malloc0 00:37:23.316 [2024-12-06 17:04:11.991221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.316 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.316 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:23.316 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.316 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2546915 00:37:23.574 17:04:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2546915 /var/tmp/bdevperf.sock 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2546915 ']' 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:23.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:23.574 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:23.575 { 00:37:23.575 "params": { 00:37:23.575 "name": "Nvme$subsystem", 00:37:23.575 "trtype": "$TEST_TRANSPORT", 00:37:23.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:23.575 "adrfam": "ipv4", 00:37:23.575 "trsvcid": "$NVMF_PORT", 00:37:23.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:23.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:23.575 "hdgst": ${hdgst:-false}, 00:37:23.575 "ddgst": ${ddgst:-false} 00:37:23.575 }, 00:37:23.575 "method": "bdev_nvme_attach_controller" 00:37:23.575 } 00:37:23.575 EOF 00:37:23.575 )") 00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
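The heredoc above is how bdevperf receives its controller configuration without touching a file: gen_nvmf_target_json expands one attach-controller stanza per subsystem, jq validates it, and the result is handed to bdevperf as --json /dev/fd/63, which is simply a bash process substitution (the rendered stanza is printed just below). A reduced sketch of the pattern; note the trace only shows the inner stanza verbatim, and the assumption here is that the helper nests it inside a full {"subsystems": ...} document before bdevperf reads it:

    # Emit a one-controller SPDK JSON config (values taken from the
    # rendered output visible in the trace).
    gen_nvmf_target_json() {
        local stanza='{
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }'
        # Assumed wrapper: bdev_nvme_attach_controller entries live in the
        # "bdev" subsystem's config array of an SPDK JSON config.
        jq . <<<"{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [$stanza]}]}"
    }

    # <(...) appears to the child process as /dev/fd/63, matching the trace.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json) -q 64 -o 65536 -w verify -t 10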
00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:23.575 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:23.575 "params": { 00:37:23.575 "name": "Nvme0", 00:37:23.575 "trtype": "tcp", 00:37:23.575 "traddr": "10.0.0.2", 00:37:23.575 "adrfam": "ipv4", 00:37:23.575 "trsvcid": "4420", 00:37:23.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:23.575 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:23.575 "hdgst": false, 00:37:23.575 "ddgst": false 00:37:23.575 }, 00:37:23.575 "method": "bdev_nvme_attach_controller" 00:37:23.575 }' 00:37:23.575 [2024-12-06 17:04:12.061123] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:37:23.575 [2024-12-06 17:04:12.061176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546915 ] 00:37:23.575 [2024-12-06 17:04:12.137763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.575 [2024-12-06 17:04:12.156076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.834 Running I/O for 10 seconds... 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:23.834 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:23.835 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:24.096 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:24.096 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:24.096 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:24.096 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.096 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:24.096 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.096 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=466 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 466 -ge 100 ']' 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.097 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.097 [2024-12-06 17:04:12.719050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc26c0 is same with the state(6) to be set 00:37:24.097 [2024-12-06 17:04:12.719087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc26c0 is same with the state(6) to be set 00:37:24.097 [2024-12-06 17:04:12.719411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.097 [2024-12-06 17:04:12.719875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.097 [2024-12-06 17:04:12.719883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.719892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.719900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.719909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.719916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.719926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.719933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.719942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.719950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.719959] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.719967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.719976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.719983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.719993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.098 [2024-12-06 17:04:12.720409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.098 [2024-12-06 17:04:12.720416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.099 [2024-12-06 17:04:12.720433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.099 [2024-12-06 17:04:12.720452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.099 [2024-12-06 17:04:12.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.099 [2024-12-06 17:04:12.720485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.099 [2024-12-06 17:04:12.720502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.099 [2024-12-06 17:04:12.720518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:24.099 [2024-12-06 17:04:12.720535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b49e0 is same with the state(6) to be set 00:37:24.099 [2024-12-06 17:04:12.720621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.099 [2024-12-06 17:04:12.720632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.099 [2024-12-06 17:04:12.720650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.099 [2024-12-06 17:04:12.720665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:37:24.099 [2024-12-06 17:04:12.720681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.720688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b7710 is same with the state(6) to be set 00:37:24.099 [2024-12-06 17:04:12.721881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:37:24.099 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.099 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:37:24.099 task offset: 70400 on job bdev=Nvme0n1 fails 00:37:24.099 00:37:24.099 Latency(us) 00:37:24.099 [2024-12-06T16:04:12.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:24.099 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:24.099 Job: Nvme0n1 ended in about 0.39 seconds with error 00:37:24.099 Verification LBA range: start 0x0 length 0x400 00:37:24.099 Nvme0n1 : 0.39 1320.52 82.53 165.06 0.00 41737.34 1897.81 38010.88 00:37:24.099 [2024-12-06T16:04:12.792Z] =================================================================================================================== 00:37:24.099 [2024-12-06T16:04:12.792Z] Total : 1320.52 82.53 165.06 0.00 41737.34 1897.81 38010.88 00:37:24.099 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.099 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:24.099 [2024-12-06 17:04:12.723920] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:24.099 [2024-12-06 17:04:12.723942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b7710 (9): Bad file descriptor 00:37:24.099 [2024-12-06 17:04:12.725083] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:37:24.099 [2024-12-06 17:04:12.725159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:37:24.099 [2024-12-06 17:04:12.725180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:24.099 [2024-12-06 17:04:12.725194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:37:24.099 [2024-12-06 17:04:12.725202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:37:24.099 [2024-12-06 17:04:12.725209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:24.099 [2024-12-06 17:04:12.725217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x24b7710 00:37:24.099 [2024-12-06 17:04:12.725236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b7710 (9): Bad file descriptor 00:37:24.099 [2024-12-06 17:04:12.725249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:37:24.099 [2024-12-06 17:04:12.725257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:37:24.099 [2024-12-06 17:04:12.725266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:37:24.099 [2024-12-06 17:04:12.725274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
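The failure above is the point of the test, not an accident. While bdevperf was driving verify I/O, the harness revoked the host's access with nvmf_subsystem_remove_host, so the target dropped the connection and aborted every queued command (the long run of ABORTED - SQ DELETION completions), and the initiator's automatic reconnect then died at FABRIC CONNECT with sct 1, sc 132 because the subsystem no longer allows nqn.2016-06.io.spdk:host0. The orchestration around the injection, condensed from the rpc_cmd calls in the trace (the 100-read-op threshold, ten tries, and 0.25 s poll are the script's own values; socket handling is simplified):

    RPC=./scripts/rpc.py

    # Let the job do some real I/O first: poll bdevperf's iostat until
    # the bdev has completed at least 100 reads.
    i=10
    while (( i-- )); do
        reads=$($RPC -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        (( reads >= 100 )) && break
        sleep 0.25
    done

    # Inject the failure: drop the host from the subsystem mid-run, then
    # restore it. The target deletes the live qpair and refuses the
    # host's reconnect attempt, so the perf job ends in error.
    $RPC nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $RPC nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

    sleep 1
    kill -9 "$perfpid" || true   # usually already gone; the log shows "No such process"

The second bdevperf invocation that follows (-t 1, against the re-admitted host) is the passing half of the test: it confirms a clean attach and a full one-second run once access is restored.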
00:37:24.099 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.099 17:04:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:37:25.480 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2546915 00:37:25.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2546915) - No such process 00:37:25.480 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:25.481 { 00:37:25.481 "params": { 00:37:25.481 "name": "Nvme$subsystem", 00:37:25.481 "trtype": "$TEST_TRANSPORT", 00:37:25.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:25.481 "adrfam": "ipv4", 00:37:25.481 "trsvcid": "$NVMF_PORT", 00:37:25.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:25.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:25.481 "hdgst": ${hdgst:-false}, 00:37:25.481 "ddgst": ${ddgst:-false} 00:37:25.481 }, 00:37:25.481 "method": "bdev_nvme_attach_controller" 00:37:25.481 } 00:37:25.481 EOF 00:37:25.481 )") 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:25.481 17:04:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:25.481 "params": { 00:37:25.481 "name": "Nvme0", 00:37:25.481 "trtype": "tcp", 00:37:25.481 "traddr": "10.0.0.2", 00:37:25.481 "adrfam": "ipv4", 00:37:25.481 "trsvcid": "4420", 00:37:25.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:25.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:25.481 "hdgst": false, 00:37:25.481 "ddgst": false 00:37:25.481 }, 00:37:25.481 "method": "bdev_nvme_attach_controller" 00:37:25.481 }' 00:37:25.481 [2024-12-06 17:04:13.769674] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:37:25.481 [2024-12-06 17:04:13.769728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2547268 ] 00:37:25.481 [2024-12-06 17:04:13.847091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:25.481 [2024-12-06 17:04:13.864051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:25.481 Running I/O for 1 seconds... 00:37:26.417 1830.00 IOPS, 114.38 MiB/s 00:37:26.417 Latency(us) 00:37:26.417 [2024-12-06T16:04:15.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.417 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:37:26.417 Verification LBA range: start 0x0 length 0x400 00:37:26.417 Nvme0n1 : 1.01 1876.64 117.29 0.00 0.00 33410.49 3413.33 34952.53 00:37:26.417 [2024-12-06T16:04:15.110Z] =================================================================================================================== 00:37:26.417 [2024-12-06T16:04:15.110Z] Total : 1876.64 117.29 0.00 0.00 33410.49 3413.33 34952.53 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:26.676 rmmod nvme_tcp 00:37:26.676 rmmod nvme_fabrics 00:37:26.676 rmmod nvme_keyring 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2546873 ']' 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2546873 00:37:26.676 17:04:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2546873 ']' 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2546873 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2546873 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2546873' 00:37:26.676 killing process with pid 2546873 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2546873 00:37:26.676 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2546873 00:37:26.676 [2024-12-06 17:04:15.356843] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:26.934 17:04:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:28.841 00:37:28.841 real 0m11.221s 00:37:28.841 user 
0m16.144s 00:37:28.841 sys 0m5.513s 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:28.841 ************************************ 00:37:28.841 END TEST nvmf_host_management 00:37:28.841 ************************************ 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:28.841 ************************************ 00:37:28.841 START TEST nvmf_lvol 00:37:28.841 ************************************ 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:28.841 * Looking for test storage... 00:37:28.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:37:28.841 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
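[editor note] The second bdevperf pass in the host-management run above (the one that completes at ~1876 IOPS) is driven entirely by a JSON config streamed through an anonymous fd (--json /dev/fd/62), built by gen_nvmf_target_json. A self-contained sketch of an equivalent invocation using a heredoc; the trace only prints the inner attach object, so the "subsystems" envelope here is the usual SPDK JSON-config shape by assumption, not copied from this log:

  ./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json /dev/stdin <<'EOF'
  {
    "subsystems": [ {
      "subsystem": "bdev",
      "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        }
      } ]
    } ]
  }
  EOF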
00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:29.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.101 --rc genhtml_branch_coverage=1 00:37:29.101 --rc genhtml_function_coverage=1 00:37:29.101 --rc genhtml_legend=1 00:37:29.101 --rc geninfo_all_blocks=1 00:37:29.101 --rc geninfo_unexecuted_blocks=1 00:37:29.101 00:37:29.101 ' 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:29.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.101 --rc genhtml_branch_coverage=1 00:37:29.101 --rc genhtml_function_coverage=1 00:37:29.101 --rc genhtml_legend=1 00:37:29.101 --rc geninfo_all_blocks=1 00:37:29.101 --rc geninfo_unexecuted_blocks=1 00:37:29.101 00:37:29.101 ' 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:29.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.101 --rc genhtml_branch_coverage=1 00:37:29.101 --rc genhtml_function_coverage=1 00:37:29.101 --rc genhtml_legend=1 00:37:29.101 --rc geninfo_all_blocks=1 00:37:29.101 --rc geninfo_unexecuted_blocks=1 00:37:29.101 00:37:29.101 ' 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:29.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.101 --rc genhtml_branch_coverage=1 00:37:29.101 --rc genhtml_function_coverage=1 
00:37:29.101 --rc genhtml_legend=1 00:37:29.101 --rc geninfo_all_blocks=1 00:37:29.101 --rc geninfo_unexecuted_blocks=1 00:37:29.101 00:37:29.101 ' 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.101 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.102 17:04:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:29.102 17:04:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:34.396 17:04:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:34.396 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:34.396 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:34.396 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:34.397 Found net devices under 0000:31:00.0: cvl_0_0 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:34.397 Found net devices under 0000:31:00.1: cvl_0_1 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:34.397 
17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:34.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:34.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:37:34.397 00:37:34.397 --- 10.0.0.2 ping statistics --- 00:37:34.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.397 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:34.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:34.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:37:34.397 00:37:34.397 --- 10.0.0.1 ping statistics --- 00:37:34.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:34.397 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:34.397 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2551939 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2551939 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2551939 ']' 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:34.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.398 17:04:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:34.398 [2024-12-06 17:04:22.920340] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:37:34.398 [2024-12-06 17:04:22.921324] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:37:34.398 [2024-12-06 17:04:22.921361] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:34.398 [2024-12-06 17:04:23.004654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:34.398 [2024-12-06 17:04:23.022597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:34.398 [2024-12-06 17:04:23.022630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:34.398 [2024-12-06 17:04:23.022639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:34.398 [2024-12-06 17:04:23.022646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:34.398 [2024-12-06 17:04:23.022651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:34.398 [2024-12-06 17:04:23.024005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.398 [2024-12-06 17:04:23.024200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:34.398 [2024-12-06 17:04:23.024390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.398 [2024-12-06 17:04:23.074931] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:34.398 [2024-12-06 17:04:23.076037] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:34.398 [2024-12-06 17:04:23.076044] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:34.398 [2024-12-06 17:04:23.076072] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
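[editor note] With the target now up in interrupt mode (three reactors, app thread and all nvmf_tgt poll-group threads switched to intr mode), the trace below builds the lvol stack over RPC and exports it via NVMe/TCP. Condensed, the sequence is as follows; names and sizes are those used by nvmf_lvol.sh, while the lvstore/lvol UUIDs captured in the trace differ on every run:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                    # Malloc0 (64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                    # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # returns the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB initial size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf drives random writes, the volume is snapshotted,
  # grown to 30 MiB, cloned, and the clone inflated:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"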
00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:34.658 [2024-12-06 17:04:23.265227] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:34.658 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:34.917 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:34.917 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:35.177 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:35.177 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:37:35.177 17:04:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:37:35.435 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0f36bc46-7a50-400e-bd6c-a61767c07f46 00:37:35.435 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0f36bc46-7a50-400e-bd6c-a61767c07f46 lvol 20 00:37:35.694 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f85f79ed-09d6-4a7a-afb4-68856bfa3ee2 00:37:35.694 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:35.694 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f85f79ed-09d6-4a7a-afb4-68856bfa3ee2 00:37:35.953 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.211 [2024-12-06 17:04:24.656998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:37:36.211 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:36.211 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2552309 00:37:36.211 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:37:36.211 17:04:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:37:37.589 17:04:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f85f79ed-09d6-4a7a-afb4-68856bfa3ee2 MY_SNAPSHOT 00:37:37.589 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ba9523f6-a357-43a2-9ffa-3d94dc3344ed 00:37:37.589 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f85f79ed-09d6-4a7a-afb4-68856bfa3ee2 30 00:37:37.589 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ba9523f6-a357-43a2-9ffa-3d94dc3344ed MY_CLONE 00:37:37.848 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=851604c0-675b-417a-b2f2-5445ea955a60 00:37:37.848 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 851604c0-675b-417a-b2f2-5445ea955a60 00:37:38.416 17:04:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2552309 00:37:48.397 Initializing NVMe Controllers 00:37:48.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:48.397 Controller IO queue size 128, less than required. 00:37:48.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:48.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:48.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:48.397 Initialization complete. Launching workers. 
00:37:48.397 ======================================================== 00:37:48.397 Latency(us) 00:37:48.397 Device Information : IOPS MiB/s Average min max 00:37:48.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16348.80 63.86 7831.56 1620.62 56221.11 00:37:48.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16433.60 64.19 7789.75 1701.73 50265.12 00:37:48.397 ======================================================== 00:37:48.397 Total : 32782.40 128.06 7810.60 1620.62 56221.11 00:37:48.397 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f85f79ed-09d6-4a7a-afb4-68856bfa3ee2 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f36bc46-7a50-400e-bd6c-a61767c07f46 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.397 rmmod nvme_tcp 00:37:48.397 rmmod nvme_fabrics 00:37:48.397 rmmod nvme_keyring 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2551939 ']' 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2551939 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2551939 ']' 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2551939 00:37:48.397 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551939 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551939' 00:37:48.398 killing process with pid 2551939 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2551939 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2551939 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.398 17:04:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.335 17:04:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:49.335 00:37:49.335 real 0m20.531s 00:37:49.335 user 0m54.428s 00:37:49.335 sys 0m8.687s 00:37:49.335 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:49.335 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:49.335 ************************************ 00:37:49.335 END TEST nvmf_lvol 00:37:49.335 ************************************ 00:37:49.335 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:49.335 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:49.335 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:49.335 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:49.596 ************************************ 00:37:49.596 START TEST nvmf_lvs_grow 00:37:49.596 
************************************ 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:49.596 * Looking for test storage... 00:37:49.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:49.596 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:49.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.597 --rc genhtml_branch_coverage=1 00:37:49.597 --rc genhtml_function_coverage=1 00:37:49.597 --rc genhtml_legend=1 00:37:49.597 --rc geninfo_all_blocks=1 00:37:49.597 --rc geninfo_unexecuted_blocks=1 00:37:49.597 00:37:49.597 ' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:49.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.597 --rc genhtml_branch_coverage=1 00:37:49.597 --rc genhtml_function_coverage=1 00:37:49.597 --rc genhtml_legend=1 00:37:49.597 --rc geninfo_all_blocks=1 00:37:49.597 --rc geninfo_unexecuted_blocks=1 00:37:49.597 00:37:49.597 ' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:49.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.597 --rc genhtml_branch_coverage=1 00:37:49.597 --rc genhtml_function_coverage=1 00:37:49.597 --rc genhtml_legend=1 00:37:49.597 --rc geninfo_all_blocks=1 00:37:49.597 --rc geninfo_unexecuted_blocks=1 00:37:49.597 00:37:49.597 ' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:49.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:49.597 --rc genhtml_branch_coverage=1 00:37:49.597 --rc genhtml_function_coverage=1 00:37:49.597 --rc genhtml_legend=1 00:37:49.597 --rc geninfo_all_blocks=1 00:37:49.597 --rc geninfo_unexecuted_blocks=1 00:37:49.597 00:37:49.597 ' 00:37:49.597 17:04:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
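The NVME_HOSTNQN and NVME_HOSTID values traced above come straight from nvme-cli: the NQN is generated, and the host ID is just its trailing UUID. A minimal standalone sketch of that setup, assuming nvme-cli is installed (the real assignments live in nvmf/common.sh):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID, reused as --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")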
00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:49.597 17:04:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:54.878 17:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
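The e810, x722 and mlx arrays being filled above index a PCI bus cache by vendor:device ID (0x8086 Intel, 0x15b3 Mellanox). Outside the harness the same probe can be approximated with pciutils; a rough sketch using only the E810/X722 IDs visible in the trace, not the script's own cache-building code:

    for id in 8086:1592 8086:159b 8086:37d2; do
        lspci -D -d "$id"     # -D prints the full domain:bus:dev.func address
    done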
00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:54.878 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:54.878 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:54.878 Found net devices under 0000:31:00.0: cvl_0_0 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:54.878 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:54.879 Found net devices under 0000:31:00.1: cvl_0_1 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:54.879 17:04:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:54.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:54.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:37:54.879 00:37:54.879 --- 10.0.0.2 ping statistics --- 00:37:54.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.879 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:54.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:54.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:37:54.879 00:37:54.879 --- 10.0.0.1 ping statistics --- 00:37:54.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:54.879 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2558969 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2558969 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2558969 ']' 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:54.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:54.879 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:54.879 [2024-12-06 17:04:43.520067] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
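Condensed from the nvmf_tcp_init trace above: the two E810 ports are split across network namespaces so target and initiator traffic actually crosses the wire. cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and the two pings prove reachability in both directions before nvmf_tgt is started inside the namespace. The same steps as plain commands, assuming identical interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator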
00:37:54.879 [2024-12-06 17:04:43.521224] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:37:54.879 [2024-12-06 17:04:43.521274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:55.139 [2024-12-06 17:04:43.598479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:55.139 [2024-12-06 17:04:43.618660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:55.139 [2024-12-06 17:04:43.618700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:55.139 [2024-12-06 17:04:43.618708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:55.139 [2024-12-06 17:04:43.618714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:55.139 [2024-12-06 17:04:43.618720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:55.139 [2024-12-06 17:04:43.619297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:55.139 [2024-12-06 17:04:43.669601] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:55.139 [2024-12-06 17:04:43.669789] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:55.139 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:55.139 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:55.139 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:55.139 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:55.139 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.139 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:55.139 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:55.399 [2024-12-06 17:04:43.860046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:55.399 ************************************ 00:37:55.399 START TEST lvs_grow_clean 00:37:55.399 ************************************ 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.399 17:04:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:55.658 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:55.658 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:55.658 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c622de88-c805-43b9-b50b-f614032e9d4a 00:37:55.658 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:37:55.658 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:55.918 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:55.919 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:55.919 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c622de88-c805-43b9-b50b-f614032e9d4a lvol 150 00:37:55.919 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b8405511-b325-40cf-be7f-91c792a0e2e2 00:37:55.919 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:55.919 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:56.179 [2024-12-06 17:04:44.723735] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:56.179 [2024-12-06 17:04:44.723879] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:56.179 true 00:37:56.179 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:56.179 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:37:56.439 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:56.439 17:04:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:56.439 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b8405511-b325-40cf-be7f-91c792a0e2e2 00:37:56.697 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:56.697 [2024-12-06 17:04:45.348326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.698 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2559491 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2559491 /var/tmp/bdevperf.sock 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2559491 ']' 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:56.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.957 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:56.957 [2024-12-06 17:04:45.553866] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:37:56.957 [2024-12-06 17:04:45.553919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559491 ] 00:37:56.957 [2024-12-06 17:04:45.631404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:57.217 [2024-12-06 17:04:45.651378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.217 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.217 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:57.217 17:04:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:57.475 Nvme0n1 00:37:57.475 17:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:57.734 [ 00:37:57.734 { 00:37:57.734 "name": "Nvme0n1", 00:37:57.734 "aliases": [ 00:37:57.734 "b8405511-b325-40cf-be7f-91c792a0e2e2" 00:37:57.734 ], 00:37:57.734 "product_name": "NVMe disk", 00:37:57.734 "block_size": 4096, 00:37:57.734 "num_blocks": 38912, 00:37:57.734 "uuid": "b8405511-b325-40cf-be7f-91c792a0e2e2", 00:37:57.734 "numa_id": 0, 00:37:57.734 "assigned_rate_limits": { 00:37:57.734 "rw_ios_per_sec": 0, 00:37:57.734 "rw_mbytes_per_sec": 0, 00:37:57.734 "r_mbytes_per_sec": 0, 00:37:57.734 "w_mbytes_per_sec": 0 00:37:57.734 }, 00:37:57.734 "claimed": false, 00:37:57.734 "zoned": false, 00:37:57.734 "supported_io_types": { 00:37:57.734 "read": true, 00:37:57.734 "write": true, 00:37:57.734 "unmap": true, 00:37:57.734 "flush": true, 00:37:57.734 "reset": true, 00:37:57.734 "nvme_admin": true, 00:37:57.734 "nvme_io": true, 00:37:57.734 "nvme_io_md": false, 00:37:57.734 "write_zeroes": true, 00:37:57.734 "zcopy": false, 00:37:57.734 "get_zone_info": false, 00:37:57.734 "zone_management": false, 00:37:57.734 "zone_append": false, 00:37:57.734 "compare": true, 00:37:57.734 "compare_and_write": true, 00:37:57.734 "abort": true, 00:37:57.734 "seek_hole": false, 00:37:57.734 "seek_data": false, 00:37:57.734 "copy": true, 
00:37:57.734 "nvme_iov_md": false 00:37:57.734 }, 00:37:57.734 "memory_domains": [ 00:37:57.734 { 00:37:57.734 "dma_device_id": "system", 00:37:57.734 "dma_device_type": 1 00:37:57.734 } 00:37:57.734 ], 00:37:57.734 "driver_specific": { 00:37:57.734 "nvme": [ 00:37:57.734 { 00:37:57.734 "trid": { 00:37:57.734 "trtype": "TCP", 00:37:57.734 "adrfam": "IPv4", 00:37:57.734 "traddr": "10.0.0.2", 00:37:57.734 "trsvcid": "4420", 00:37:57.734 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:57.734 }, 00:37:57.734 "ctrlr_data": { 00:37:57.734 "cntlid": 1, 00:37:57.734 "vendor_id": "0x8086", 00:37:57.734 "model_number": "SPDK bdev Controller", 00:37:57.734 "serial_number": "SPDK0", 00:37:57.734 "firmware_revision": "25.01", 00:37:57.734 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.734 "oacs": { 00:37:57.734 "security": 0, 00:37:57.734 "format": 0, 00:37:57.734 "firmware": 0, 00:37:57.734 "ns_manage": 0 00:37:57.734 }, 00:37:57.734 "multi_ctrlr": true, 00:37:57.734 "ana_reporting": false 00:37:57.734 }, 00:37:57.734 "vs": { 00:37:57.734 "nvme_version": "1.3" 00:37:57.734 }, 00:37:57.734 "ns_data": { 00:37:57.734 "id": 1, 00:37:57.734 "can_share": true 00:37:57.734 } 00:37:57.734 } 00:37:57.734 ], 00:37:57.734 "mp_policy": "active_passive" 00:37:57.734 } 00:37:57.734 } 00:37:57.734 ] 00:37:57.734 17:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2559678 00:37:57.734 17:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:57.734 17:04:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:57.734 Running I/O for 10 seconds... 
00:37:58.669 Latency(us) 00:37:58.669 [2024-12-06T16:04:47.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:58.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:58.669 Nvme0n1 : 1.00 17656.00 68.97 0.00 0.00 0.00 0.00 0.00 00:37:58.669 [2024-12-06T16:04:47.362Z] =================================================================================================================== 00:37:58.669 [2024-12-06T16:04:47.362Z] Total : 17656.00 68.97 0.00 0.00 0.00 0.00 0.00 00:37:58.669 00:37:59.604 17:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c622de88-c805-43b9-b50b-f614032e9d4a 00:37:59.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:59.863 Nvme0n1 : 2.00 17908.50 69.96 0.00 0.00 0.00 0.00 0.00 00:37:59.863 [2024-12-06T16:04:48.556Z] =================================================================================================================== 00:37:59.863 [2024-12-06T16:04:48.556Z] Total : 17908.50 69.96 0.00 0.00 0.00 0.00 0.00 00:37:59.863 00:37:59.863 true 00:37:59.863 17:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:37:59.863 17:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:00.122 17:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:00.122 17:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:00.122 17:04:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2559678 00:38:00.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:00.692 Nvme0n1 : 3.00 17992.67 70.28 0.00 0.00 0.00 0.00 0.00 00:38:00.692 [2024-12-06T16:04:49.385Z] =================================================================================================================== 00:38:00.692 [2024-12-06T16:04:49.385Z] Total : 17992.67 70.28 0.00 0.00 0.00 0.00 0.00 00:38:00.692 00:38:02.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:02.071 Nvme0n1 : 4.00 19067.00 74.48 0.00 0.00 0.00 0.00 0.00 00:38:02.071 [2024-12-06T16:04:50.764Z] =================================================================================================================== 00:38:02.071 [2024-12-06T16:04:50.764Z] Total : 19067.00 74.48 0.00 0.00 0.00 0.00 0.00 00:38:02.071 00:38:03.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:03.010 Nvme0n1 : 5.00 20384.80 79.63 0.00 0.00 0.00 0.00 0.00 00:38:03.010 [2024-12-06T16:04:51.703Z] =================================================================================================================== 00:38:03.010 [2024-12-06T16:04:51.703Z] Total : 20384.80 79.63 0.00 0.00 0.00 0.00 0.00 00:38:03.010 00:38:03.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:03.947 Nvme0n1 : 6.00 21274.33 83.10 0.00 0.00 0.00 0.00 0.00 00:38:03.947 [2024-12-06T16:04:52.640Z] 
=================================================================================================================== 00:38:03.947 [2024-12-06T16:04:52.640Z] Total : 21274.33 83.10 0.00 0.00 0.00 0.00 0.00 00:38:03.947 00:38:04.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:04.885 Nvme0n1 : 7.00 21900.57 85.55 0.00 0.00 0.00 0.00 0.00 00:38:04.885 [2024-12-06T16:04:53.578Z] =================================================================================================================== 00:38:04.885 [2024-12-06T16:04:53.578Z] Total : 21900.57 85.55 0.00 0.00 0.00 0.00 0.00 00:38:04.885 00:38:05.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:05.825 Nvme0n1 : 8.00 22378.38 87.42 0.00 0.00 0.00 0.00 0.00 00:38:05.825 [2024-12-06T16:04:54.518Z] =================================================================================================================== 00:38:05.825 [2024-12-06T16:04:54.518Z] Total : 22378.38 87.42 0.00 0.00 0.00 0.00 0.00 00:38:05.825 00:38:06.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:06.764 Nvme0n1 : 9.00 22757.00 88.89 0.00 0.00 0.00 0.00 0.00 00:38:06.764 [2024-12-06T16:04:55.457Z] =================================================================================================================== 00:38:06.764 [2024-12-06T16:04:55.457Z] Total : 22757.00 88.89 0.00 0.00 0.00 0.00 0.00 00:38:06.764 00:38:07.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.701 Nvme0n1 : 10.00 23054.10 90.06 0.00 0.00 0.00 0.00 0.00 00:38:07.701 [2024-12-06T16:04:56.394Z] =================================================================================================================== 00:38:07.701 [2024-12-06T16:04:56.394Z] Total : 23054.10 90.06 0.00 0.00 0.00 0.00 0.00 00:38:07.701 00:38:07.701 00:38:07.701 Latency(us) 00:38:07.701 [2024-12-06T16:04:56.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:07.701 Nvme0n1 : 10.01 23060.39 90.08 0.00 0.00 5547.81 2239.15 14527.15 00:38:07.701 [2024-12-06T16:04:56.394Z] =================================================================================================================== 00:38:07.701 [2024-12-06T16:04:56.394Z] Total : 23060.39 90.08 0.00 0.00 5547.81 2239.15 14527.15 00:38:07.701 { 00:38:07.701 "results": [ 00:38:07.701 { 00:38:07.701 "job": "Nvme0n1", 00:38:07.701 "core_mask": "0x2", 00:38:07.701 "workload": "randwrite", 00:38:07.701 "status": "finished", 00:38:07.701 "queue_depth": 128, 00:38:07.701 "io_size": 4096, 00:38:07.701 "runtime": 10.005556, 00:38:07.701 "iops": 23060.387648622425, 00:38:07.701 "mibps": 90.07963925243135, 00:38:07.701 "io_failed": 0, 00:38:07.701 "io_timeout": 0, 00:38:07.701 "avg_latency_us": 5547.811062069124, 00:38:07.701 "min_latency_us": 2239.1466666666665, 00:38:07.701 "max_latency_us": 14527.146666666667 00:38:07.701 } 00:38:07.701 ], 00:38:07.701 "core_count": 1 00:38:07.701 } 00:38:07.701 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2559491 00:38:07.701 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2559491 ']' 00:38:07.701 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2559491 
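The jump in total_data_clusters (49 at creation, 99 from second 2 of the run onward) is the point of the test: the AIO backing file was already enlarged and rescanned before the workload started, and bdev_lvol_grow_lvstore is issued mid-run so the lvstore claims the new space while bdevperf keeps writing. The sequence as traced, with $lvs standing in for the c622de88-... store UUID and paths relative to the spdk tree:

    truncate -s 400M test/nvmf/target/aio_bdev           # 200M -> 400M backing file
    scripts/rpc.py bdev_aio_rescan aio_bdev              # block count 51200 -> 102400
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # issued while I/O is in flight
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99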
00:38:07.701 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:07.701 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:07.701 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2559491 00:38:07.960 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:07.960 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:07.960 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2559491' 00:38:07.960 killing process with pid 2559491 00:38:07.960 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2559491 00:38:07.960 Received shutdown signal, test time was about 10.000000 seconds 00:38:07.960 00:38:07.960 Latency(us) 00:38:07.960 [2024-12-06T16:04:56.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:07.960 [2024-12-06T16:04:56.653Z] =================================================================================================================== 00:38:07.960 [2024-12-06T16:04:56.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:07.960 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2559491 00:38:07.960 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:08.219 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:08.219 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:38:08.219 17:04:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:08.495 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:08.495 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:08.495 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:08.822 [2024-12-06 17:04:57.183800] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 
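Teardown then asserts the inverse relationship: deleting aio_bdev hot-removes the lvstore with it, so the bdev_lvol_get_lvstores call traced below has to fail with -19 (No such device). The NOT wrapper simply inverts the exit status; a minimal re-implementation of that idea (the helper name "not" is illustrative, not the autotest_common.sh source):

    not() { if "$@"; then return 1; else return 0; fi; }
    not scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a \
        && echo 'lvstore gone, as expected'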
00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:38:08.822 request: 00:38:08.822 { 00:38:08.822 "uuid": "c622de88-c805-43b9-b50b-f614032e9d4a", 00:38:08.822 "method": "bdev_lvol_get_lvstores", 00:38:08.822 "req_id": 1 00:38:08.822 } 00:38:08.822 Got JSON-RPC error response 00:38:08.822 response: 00:38:08.822 { 00:38:08.822 "code": -19, 00:38:08.822 "message": "No such device" 00:38:08.822 } 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:08.822 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:09.105 aio_bdev 00:38:09.105 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
b8405511-b325-40cf-be7f-91c792a0e2e2 00:38:09.105 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b8405511-b325-40cf-be7f-91c792a0e2e2 00:38:09.105 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:09.106 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:09.106 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:09.106 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:09.106 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:09.106 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b8405511-b325-40cf-be7f-91c792a0e2e2 -t 2000 00:38:09.393 [ 00:38:09.393 { 00:38:09.393 "name": "b8405511-b325-40cf-be7f-91c792a0e2e2", 00:38:09.393 "aliases": [ 00:38:09.393 "lvs/lvol" 00:38:09.393 ], 00:38:09.393 "product_name": "Logical Volume", 00:38:09.393 "block_size": 4096, 00:38:09.393 "num_blocks": 38912, 00:38:09.393 "uuid": "b8405511-b325-40cf-be7f-91c792a0e2e2", 00:38:09.393 "assigned_rate_limits": { 00:38:09.393 "rw_ios_per_sec": 0, 00:38:09.393 "rw_mbytes_per_sec": 0, 00:38:09.393 "r_mbytes_per_sec": 0, 00:38:09.393 "w_mbytes_per_sec": 0 00:38:09.393 }, 00:38:09.393 "claimed": false, 00:38:09.393 "zoned": false, 00:38:09.393 "supported_io_types": { 00:38:09.393 "read": true, 00:38:09.393 "write": true, 00:38:09.393 "unmap": true, 00:38:09.393 "flush": false, 00:38:09.393 "reset": true, 00:38:09.393 "nvme_admin": false, 00:38:09.393 "nvme_io": false, 00:38:09.393 "nvme_io_md": false, 00:38:09.393 "write_zeroes": true, 00:38:09.393 "zcopy": false, 00:38:09.393 "get_zone_info": false, 00:38:09.393 "zone_management": false, 00:38:09.393 "zone_append": false, 00:38:09.393 "compare": false, 00:38:09.393 "compare_and_write": false, 00:38:09.393 "abort": false, 00:38:09.393 "seek_hole": true, 00:38:09.393 "seek_data": true, 00:38:09.393 "copy": false, 00:38:09.393 "nvme_iov_md": false 00:38:09.393 }, 00:38:09.393 "driver_specific": { 00:38:09.393 "lvol": { 00:38:09.393 "lvol_store_uuid": "c622de88-c805-43b9-b50b-f614032e9d4a", 00:38:09.393 "base_bdev": "aio_bdev", 00:38:09.393 "thin_provision": false, 00:38:09.393 "num_allocated_clusters": 38, 00:38:09.393 "snapshot": false, 00:38:09.393 "clone": false, 00:38:09.393 "esnap_clone": false 00:38:09.393 } 00:38:09.393 } 00:38:09.393 } 00:38:09.393 ] 00:38:09.393 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:09.393 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:38:09.393 17:04:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:09.393 17:04:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:09.393 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c622de88-c805-43b9-b50b-f614032e9d4a 00:38:09.393 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:09.652 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:09.652 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b8405511-b325-40cf-be7f-91c792a0e2e2 00:38:09.652 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c622de88-c805-43b9-b50b-f614032e9d4a 00:38:09.912 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.172 00:38:10.172 real 0m14.773s 00:38:10.172 user 0m14.361s 00:38:10.172 sys 0m1.170s 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:10.172 ************************************ 00:38:10.172 END TEST lvs_grow_clean 00:38:10.172 ************************************ 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:10.172 ************************************ 00:38:10.172 START TEST lvs_grow_dirty 00:38:10.172 ************************************ 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.172 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:10.431 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:10.431 17:04:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:10.431 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=604a8360-409c-478b-8e03-7ffbe3069aba 00:38:10.431 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:10.431 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:10.690 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:10.690 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:10.690 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 604a8360-409c-478b-8e03-7ffbe3069aba lvol 150 00:38:10.690 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f167afd8-5965-4b30-ab38-d990d51df4f9 00:38:10.690 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:10.690 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:10.950 [2024-12-06 17:04:59.511734] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:10.950 [2024-12-06 17:04:59.511874] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:10.950 true 00:38:10.950 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:10.950 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:11.209 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:11.209 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:11.209 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f167afd8-5965-4b30-ab38-d990d51df4f9 00:38:11.469 17:04:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:11.469 [2024-12-06 17:05:00.140303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:11.469 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2562733 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2562733 /var/tmp/bdevperf.sock 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2562733 ']' 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:11.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
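At this point the dirty-path fixture is fully assembled: a 200 MiB file-backed aio_bdev carries an lvstore with 4 MiB clusters (49 data clusters), a 150 MiB lvol has been created on it, and the backing file has already been grown to 400 MiB and rescanned. A condensed sketch of the setup RPCs from this trace ($AIO_FILE and $LVS_UUID are placeholders for the long paths and UUIDs in the log):

truncate -s 200M "$AIO_FILE"                        # backing file for the aio bdev
rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
    --md-pages-per-cluster-ratio 300 aio_bdev lvs   # yields 49 data clusters
lvol=$(rpc.py bdev_lvol_create -u "$LVS_UUID" lvol 150)

truncate -s 400M "$AIO_FILE"                        # grow the backing file...
rpc.py bdev_aio_rescan aio_bdev                     # ...and let SPDK notice

# Export the lvol over NVMe/TCP so bdevperf can drive random writes at it.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While bdevperf runs its 10-second randwrite workload, the test issues bdev_lvol_grow_lvstore mid-I/O, which is why the cluster total is expected to move from 49 to 99 in the checks that follow.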
00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:11.730 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:11.730 [2024-12-06 17:05:00.335076] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:38:11.730 [2024-12-06 17:05:00.335120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562733 ] 00:38:11.730 [2024-12-06 17:05:00.389804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.730 [2024-12-06 17:05:00.406180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:11.990 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:11.990 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:11.990 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:11.990 Nvme0n1 00:38:11.990 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:12.250 [ 00:38:12.250 { 00:38:12.250 "name": "Nvme0n1", 00:38:12.250 "aliases": [ 00:38:12.250 "f167afd8-5965-4b30-ab38-d990d51df4f9" 00:38:12.250 ], 00:38:12.250 "product_name": "NVMe disk", 00:38:12.250 "block_size": 4096, 00:38:12.250 "num_blocks": 38912, 00:38:12.250 "uuid": "f167afd8-5965-4b30-ab38-d990d51df4f9", 00:38:12.250 "numa_id": 0, 00:38:12.250 "assigned_rate_limits": { 00:38:12.250 "rw_ios_per_sec": 0, 00:38:12.250 "rw_mbytes_per_sec": 0, 00:38:12.250 "r_mbytes_per_sec": 0, 00:38:12.250 "w_mbytes_per_sec": 0 00:38:12.250 }, 00:38:12.250 "claimed": false, 00:38:12.250 "zoned": false, 00:38:12.250 "supported_io_types": { 00:38:12.250 "read": true, 00:38:12.250 "write": true, 00:38:12.250 "unmap": true, 00:38:12.250 "flush": true, 00:38:12.250 "reset": true, 00:38:12.250 "nvme_admin": true, 00:38:12.250 "nvme_io": true, 00:38:12.250 "nvme_io_md": false, 00:38:12.250 "write_zeroes": true, 00:38:12.250 "zcopy": false, 00:38:12.250 "get_zone_info": false, 00:38:12.250 "zone_management": false, 00:38:12.250 "zone_append": false, 00:38:12.250 "compare": true, 00:38:12.250 "compare_and_write": true, 00:38:12.250 "abort": true, 00:38:12.250 "seek_hole": false, 00:38:12.250 "seek_data": false, 00:38:12.250 "copy": true, 00:38:12.250 "nvme_iov_md": false 00:38:12.250 }, 00:38:12.250 "memory_domains": [ 00:38:12.250 { 00:38:12.250 "dma_device_id": "system", 00:38:12.250 "dma_device_type": 1 00:38:12.250 } 00:38:12.250 ], 00:38:12.250 "driver_specific": { 
00:38:12.250 "nvme": [ 00:38:12.250 { 00:38:12.250 "trid": { 00:38:12.250 "trtype": "TCP", 00:38:12.250 "adrfam": "IPv4", 00:38:12.250 "traddr": "10.0.0.2", 00:38:12.250 "trsvcid": "4420", 00:38:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:12.250 }, 00:38:12.250 "ctrlr_data": { 00:38:12.250 "cntlid": 1, 00:38:12.250 "vendor_id": "0x8086", 00:38:12.250 "model_number": "SPDK bdev Controller", 00:38:12.250 "serial_number": "SPDK0", 00:38:12.250 "firmware_revision": "25.01", 00:38:12.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:12.250 "oacs": { 00:38:12.250 "security": 0, 00:38:12.250 "format": 0, 00:38:12.250 "firmware": 0, 00:38:12.250 "ns_manage": 0 00:38:12.250 }, 00:38:12.250 "multi_ctrlr": true, 00:38:12.250 "ana_reporting": false 00:38:12.250 }, 00:38:12.250 "vs": { 00:38:12.250 "nvme_version": "1.3" 00:38:12.250 }, 00:38:12.250 "ns_data": { 00:38:12.250 "id": 1, 00:38:12.250 "can_share": true 00:38:12.250 } 00:38:12.250 } 00:38:12.250 ], 00:38:12.250 "mp_policy": "active_passive" 00:38:12.250 } 00:38:12.250 } 00:38:12.250 ] 00:38:12.251 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2562739 00:38:12.251 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:12.251 17:05:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:12.251 Running I/O for 10 seconds... 00:38:13.629 Latency(us) 00:38:13.629 [2024-12-06T16:05:02.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:13.629 Nvme0n1 : 1.00 25348.00 99.02 0.00 0.00 0.00 0.00 0.00 00:38:13.629 [2024-12-06T16:05:02.322Z] =================================================================================================================== 00:38:13.629 [2024-12-06T16:05:02.322Z] Total : 25348.00 99.02 0.00 0.00 0.00 0.00 0.00 00:38:13.629 00:38:14.196 17:05:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:14.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:14.455 Nvme0n1 : 2.00 25438.50 99.37 0.00 0.00 0.00 0.00 0.00 00:38:14.455 [2024-12-06T16:05:03.148Z] =================================================================================================================== 00:38:14.455 [2024-12-06T16:05:03.148Z] Total : 25438.50 99.37 0.00 0.00 0.00 0.00 0.00 00:38:14.455 00:38:14.455 true 00:38:14.455 17:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:14.455 17:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:14.714 17:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:14.714 17:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 
-- # (( data_clusters == 99 )) 00:38:14.714 17:05:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2562739 00:38:15.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:15.282 Nvme0n1 : 3.00 25489.33 99.57 0.00 0.00 0.00 0.00 0.00 00:38:15.282 [2024-12-06T16:05:03.975Z] =================================================================================================================== 00:38:15.282 [2024-12-06T16:05:03.975Z] Total : 25489.33 99.57 0.00 0.00 0.00 0.00 0.00 00:38:15.282 00:38:16.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:16.220 Nvme0n1 : 4.00 25532.00 99.73 0.00 0.00 0.00 0.00 0.00 00:38:16.220 [2024-12-06T16:05:04.913Z] =================================================================================================================== 00:38:16.220 [2024-12-06T16:05:04.913Z] Total : 25532.00 99.73 0.00 0.00 0.00 0.00 0.00 00:38:16.220 00:38:17.598 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:17.598 Nvme0n1 : 5.00 25569.40 99.88 0.00 0.00 0.00 0.00 0.00 00:38:17.598 [2024-12-06T16:05:06.291Z] =================================================================================================================== 00:38:17.598 [2024-12-06T16:05:06.291Z] Total : 25569.40 99.88 0.00 0.00 0.00 0.00 0.00 00:38:17.598 00:38:18.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:18.533 Nvme0n1 : 6.00 25594.50 99.98 0.00 0.00 0.00 0.00 0.00 00:38:18.533 [2024-12-06T16:05:07.226Z] =================================================================================================================== 00:38:18.533 [2024-12-06T16:05:07.226Z] Total : 25594.50 99.98 0.00 0.00 0.00 0.00 0.00 00:38:18.533 00:38:19.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:19.469 Nvme0n1 : 7.00 25612.71 100.05 0.00 0.00 0.00 0.00 0.00 00:38:19.469 [2024-12-06T16:05:08.162Z] =================================================================================================================== 00:38:19.469 [2024-12-06T16:05:08.162Z] Total : 25612.71 100.05 0.00 0.00 0.00 0.00 0.00 00:38:19.469 00:38:20.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:20.411 Nvme0n1 : 8.00 25626.00 100.10 0.00 0.00 0.00 0.00 0.00 00:38:20.411 [2024-12-06T16:05:09.104Z] =================================================================================================================== 00:38:20.411 [2024-12-06T16:05:09.104Z] Total : 25626.00 100.10 0.00 0.00 0.00 0.00 0.00 00:38:20.411 00:38:21.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:21.347 Nvme0n1 : 9.00 25643.67 100.17 0.00 0.00 0.00 0.00 0.00 00:38:21.347 [2024-12-06T16:05:10.040Z] =================================================================================================================== 00:38:21.347 [2024-12-06T16:05:10.040Z] Total : 25643.67 100.17 0.00 0.00 0.00 0.00 0.00 00:38:21.347 00:38:22.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.283 Nvme0n1 : 10.00 25646.00 100.18 0.00 0.00 0.00 0.00 0.00 00:38:22.283 [2024-12-06T16:05:10.976Z] =================================================================================================================== 00:38:22.283 [2024-12-06T16:05:10.976Z] Total : 25646.00 100.18 0.00 0.00 0.00 0.00 0.00 00:38:22.283 00:38:22.283 00:38:22.283 Latency(us) 00:38:22.283 
[2024-12-06T16:05:10.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:22.283 Nvme0n1 : 10.00 25646.74 100.18 0.00 0.00 4987.84 1658.88 9885.01 00:38:22.283 [2024-12-06T16:05:10.976Z] =================================================================================================================== 00:38:22.283 [2024-12-06T16:05:10.976Z] Total : 25646.74 100.18 0.00 0.00 4987.84 1658.88 9885.01 00:38:22.283 { 00:38:22.283 "results": [ 00:38:22.283 { 00:38:22.283 "job": "Nvme0n1", 00:38:22.283 "core_mask": "0x2", 00:38:22.283 "workload": "randwrite", 00:38:22.283 "status": "finished", 00:38:22.283 "queue_depth": 128, 00:38:22.283 "io_size": 4096, 00:38:22.283 "runtime": 10.002205, 00:38:22.283 "iops": 25646.744892751147, 00:38:22.283 "mibps": 100.18259723730917, 00:38:22.283 "io_failed": 0, 00:38:22.283 "io_timeout": 0, 00:38:22.283 "avg_latency_us": 4987.840154683383, 00:38:22.283 "min_latency_us": 1658.88, 00:38:22.283 "max_latency_us": 9885.013333333334 00:38:22.283 } 00:38:22.283 ], 00:38:22.283 "core_count": 1 00:38:22.283 } 00:38:22.283 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2562733 00:38:22.283 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2562733 ']' 00:38:22.283 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2562733 00:38:22.283 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:22.283 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:22.283 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2562733 00:38:22.542 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:22.542 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:22.542 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2562733' 00:38:22.542 killing process with pid 2562733 00:38:22.542 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2562733 00:38:22.542 Received shutdown signal, test time was about 10.000000 seconds 00:38:22.542 00:38:22.542 Latency(us) 00:38:22.542 [2024-12-06T16:05:11.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:22.542 [2024-12-06T16:05:11.235Z] =================================================================================================================== 00:38:22.542 [2024-12-06T16:05:11.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:22.542 17:05:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2562733 00:38:22.542 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:22.802 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:22.802 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:22.802 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2558969 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2558969 00:38:23.062 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2558969 Killed "${NVMF_APP[@]}" "$@" 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2565071 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2565071 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2565071 ']' 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:23.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
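The dirty variant diverges from the clean teardown here: instead of deleting the lvstore, the test SIGKILLs the target while the store's metadata is still dirty, then boots a fresh interrupt-mode target to recover it. A minimal sketch, assuming $NVMF_PID holds the old target's pid and waitforlisten is the suite helper seen in the trace (the log additionally runs the new target inside the cvl_0_0_ns_spdk network namespace):

kill -9 "$NVMF_PID"            # simulated crash; lvstore metadata stays dirty
wait "$NVMF_PID" || true       # reap; the shell reports "Killed", as logged above

nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
NVMF_PID=$!
waitforlisten "$NVMF_PID"      # block until /var/tmp/spdk.sock answers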
00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:23.062 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:23.062 [2024-12-06 17:05:11.616222] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:23.062 [2024-12-06 17:05:11.617223] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:38:23.062 [2024-12-06 17:05:11.617265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:23.062 [2024-12-06 17:05:11.690410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.062 [2024-12-06 17:05:11.705797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:23.062 [2024-12-06 17:05:11.705825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:23.062 [2024-12-06 17:05:11.705830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:23.062 [2024-12-06 17:05:11.705835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:23.062 [2024-12-06 17:05:11.705839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:23.062 [2024-12-06 17:05:11.706251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.062 [2024-12-06 17:05:11.751848] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:23.062 [2024-12-06 17:05:11.752027] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
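Re-creating the aio bdev below is what triggers recovery: the blobstore notices the dirty shutdown and replays its metadata (the "Performing recovery on blobstore" notices that follow), after which the test asserts that both the free-cluster count and the grown total survived the crash. Condensed, with the same placeholder conventions as above:

rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096    # replays the dirty blobstore
rpc.py bdev_wait_for_examine

free=$(rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
total=$(rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
(( free == 61 ))     # unchanged from before the SIGKILL
(( total == 99 ))    # the mid-I/O grow to 99 clusters was persisted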
00:38:23.321 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:23.322 [2024-12-06 17:05:11.937041] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:23.322 [2024-12-06 17:05:11.937126] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:23.322 [2024-12-06 17:05:11.937150] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f167afd8-5965-4b30-ab38-d990d51df4f9 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f167afd8-5965-4b30-ab38-d990d51df4f9 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:23.322 17:05:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:23.581 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f167afd8-5965-4b30-ab38-d990d51df4f9 -t 2000 00:38:23.581 [ 00:38:23.581 { 00:38:23.581 "name": "f167afd8-5965-4b30-ab38-d990d51df4f9", 00:38:23.581 "aliases": [ 00:38:23.581 "lvs/lvol" 00:38:23.581 ], 00:38:23.581 "product_name": "Logical Volume", 00:38:23.581 "block_size": 4096, 00:38:23.581 "num_blocks": 38912, 00:38:23.581 "uuid": "f167afd8-5965-4b30-ab38-d990d51df4f9", 00:38:23.581 "assigned_rate_limits": { 00:38:23.581 "rw_ios_per_sec": 0, 00:38:23.581 "rw_mbytes_per_sec": 0, 00:38:23.581 
"r_mbytes_per_sec": 0, 00:38:23.581 "w_mbytes_per_sec": 0 00:38:23.581 }, 00:38:23.581 "claimed": false, 00:38:23.581 "zoned": false, 00:38:23.581 "supported_io_types": { 00:38:23.581 "read": true, 00:38:23.581 "write": true, 00:38:23.581 "unmap": true, 00:38:23.581 "flush": false, 00:38:23.581 "reset": true, 00:38:23.581 "nvme_admin": false, 00:38:23.581 "nvme_io": false, 00:38:23.581 "nvme_io_md": false, 00:38:23.581 "write_zeroes": true, 00:38:23.581 "zcopy": false, 00:38:23.581 "get_zone_info": false, 00:38:23.581 "zone_management": false, 00:38:23.581 "zone_append": false, 00:38:23.581 "compare": false, 00:38:23.581 "compare_and_write": false, 00:38:23.581 "abort": false, 00:38:23.581 "seek_hole": true, 00:38:23.581 "seek_data": true, 00:38:23.581 "copy": false, 00:38:23.581 "nvme_iov_md": false 00:38:23.581 }, 00:38:23.581 "driver_specific": { 00:38:23.581 "lvol": { 00:38:23.581 "lvol_store_uuid": "604a8360-409c-478b-8e03-7ffbe3069aba", 00:38:23.581 "base_bdev": "aio_bdev", 00:38:23.581 "thin_provision": false, 00:38:23.581 "num_allocated_clusters": 38, 00:38:23.581 "snapshot": false, 00:38:23.581 "clone": false, 00:38:23.581 "esnap_clone": false 00:38:23.581 } 00:38:23.581 } 00:38:23.581 } 00:38:23.581 ] 00:38:23.581 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:23.581 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:23.581 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:23.841 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:23.841 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:23.841 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:24.100 [2024-12-06 17:05:12.714718] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:24.100 17:05:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:24.100 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:24.359 request: 00:38:24.359 { 00:38:24.359 "uuid": "604a8360-409c-478b-8e03-7ffbe3069aba", 00:38:24.359 "method": "bdev_lvol_get_lvstores", 00:38:24.359 "req_id": 1 00:38:24.359 } 00:38:24.359 Got JSON-RPC error response 00:38:24.359 response: 00:38:24.359 { 00:38:24.359 "code": -19, 00:38:24.359 "message": "No such device" 00:38:24.359 } 00:38:24.359 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:24.359 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:24.359 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:24.359 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:24.359 17:05:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:24.359 aio_bdev 00:38:24.620 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f167afd8-5965-4b30-ab38-d990d51df4f9 00:38:24.620 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f167afd8-5965-4b30-ab38-d990d51df4f9 00:38:24.620 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:24.620 17:05:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:24.620 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:24.620 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:24.620 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:24.620 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f167afd8-5965-4b30-ab38-d990d51df4f9 -t 2000 00:38:24.879 [ 00:38:24.879 { 00:38:24.879 "name": "f167afd8-5965-4b30-ab38-d990d51df4f9", 00:38:24.879 "aliases": [ 00:38:24.879 "lvs/lvol" 00:38:24.879 ], 00:38:24.879 "product_name": "Logical Volume", 00:38:24.879 "block_size": 4096, 00:38:24.879 "num_blocks": 38912, 00:38:24.879 "uuid": "f167afd8-5965-4b30-ab38-d990d51df4f9", 00:38:24.879 "assigned_rate_limits": { 00:38:24.879 "rw_ios_per_sec": 0, 00:38:24.879 "rw_mbytes_per_sec": 0, 00:38:24.879 "r_mbytes_per_sec": 0, 00:38:24.879 "w_mbytes_per_sec": 0 00:38:24.879 }, 00:38:24.879 "claimed": false, 00:38:24.879 "zoned": false, 00:38:24.879 "supported_io_types": { 00:38:24.879 "read": true, 00:38:24.879 "write": true, 00:38:24.879 "unmap": true, 00:38:24.879 "flush": false, 00:38:24.879 "reset": true, 00:38:24.879 "nvme_admin": false, 00:38:24.879 "nvme_io": false, 00:38:24.879 "nvme_io_md": false, 00:38:24.879 "write_zeroes": true, 00:38:24.879 "zcopy": false, 00:38:24.879 "get_zone_info": false, 00:38:24.879 "zone_management": false, 00:38:24.879 "zone_append": false, 00:38:24.879 "compare": false, 00:38:24.879 "compare_and_write": false, 00:38:24.879 "abort": false, 00:38:24.879 "seek_hole": true, 00:38:24.879 "seek_data": true, 00:38:24.879 "copy": false, 00:38:24.879 "nvme_iov_md": false 00:38:24.879 }, 00:38:24.879 "driver_specific": { 00:38:24.879 "lvol": { 00:38:24.879 "lvol_store_uuid": "604a8360-409c-478b-8e03-7ffbe3069aba", 00:38:24.879 "base_bdev": "aio_bdev", 00:38:24.879 "thin_provision": false, 00:38:24.879 "num_allocated_clusters": 38, 00:38:24.879 "snapshot": false, 00:38:24.879 "clone": false, 00:38:24.879 "esnap_clone": false 00:38:24.879 } 00:38:24.879 } 00:38:24.879 } 00:38:24.879 ] 00:38:24.879 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:24.879 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:24.879 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:24.879 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:24.880 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:24.880 17:05:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:25.139 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:25.139 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f167afd8-5965-4b30-ab38-d990d51df4f9 00:38:25.139 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 604a8360-409c-478b-8e03-7ffbe3069aba 00:38:25.398 17:05:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:25.658 00:38:25.658 real 0m15.445s 00:38:25.658 user 0m33.807s 00:38:25.658 sys 0m2.619s 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:25.658 ************************************ 00:38:25.658 END TEST lvs_grow_dirty 00:38:25.658 ************************************ 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:25.658 nvmf_trace.0 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
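Before unloading the kernel modules, the harness archives the tracepoint buffer that the target produced under -e 0xFFFF; the shm lookup and tar invocation below are taken directly from the trace, with $OUTPUT_DIR standing in for the long Jenkins output path:

shm=$(find /dev/shm -name '*.0' -printf '%f\n')     # e.g. nvmf_trace.0
tar -C /dev/shm/ -cvzf "$OUTPUT_DIR/${shm}_shm.tar.gz" "$shm"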
00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:25.658 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:25.658 rmmod nvme_tcp 00:38:25.658 rmmod nvme_fabrics 00:38:25.658 rmmod nvme_keyring 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2565071 ']' 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2565071 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2565071 ']' 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2565071 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2565071 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2565071' 00:38:25.659 killing process with pid 2565071 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2565071 00:38:25.659 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2565071 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.919 17:05:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.822 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:27.822 00:38:27.822 real 0m38.438s 00:38:27.822 user 0m50.114s 00:38:27.822 sys 0m8.093s 00:38:27.822 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:27.822 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:27.822 ************************************ 00:38:27.822 END TEST nvmf_lvs_grow 00:38:27.822 ************************************ 00:38:27.822 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:27.822 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:27.822 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:27.822 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:28.081 ************************************ 00:38:28.081 START TEST nvmf_bdev_io_wait 00:38:28.081 ************************************ 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:28.081 * Looking for test storage... 
00:38:28.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:28.081 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.082 --rc genhtml_branch_coverage=1 00:38:28.082 --rc genhtml_function_coverage=1 00:38:28.082 --rc genhtml_legend=1 00:38:28.082 --rc geninfo_all_blocks=1 00:38:28.082 --rc geninfo_unexecuted_blocks=1 00:38:28.082 00:38:28.082 ' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.082 --rc genhtml_branch_coverage=1 00:38:28.082 --rc genhtml_function_coverage=1 00:38:28.082 --rc genhtml_legend=1 00:38:28.082 --rc geninfo_all_blocks=1 00:38:28.082 --rc geninfo_unexecuted_blocks=1 00:38:28.082 00:38:28.082 ' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.082 --rc genhtml_branch_coverage=1 00:38:28.082 --rc genhtml_function_coverage=1 00:38:28.082 --rc genhtml_legend=1 00:38:28.082 --rc geninfo_all_blocks=1 00:38:28.082 --rc geninfo_unexecuted_blocks=1 00:38:28.082 00:38:28.082 ' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:28.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:28.082 --rc genhtml_branch_coverage=1 00:38:28.082 --rc genhtml_function_coverage=1 00:38:28.082 --rc genhtml_legend=1 00:38:28.082 --rc geninfo_all_blocks=1 00:38:28.082 --rc 
geninfo_unexecuted_blocks=1 00:38:28.082 00:38:28.082 ' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:28.082 17:05:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
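The block above fills per-family NIC pools keyed by PCI vendor:device ID (Intel 0x8086 for the E810 parts 0x1592/0x159b and the X722 0x37d2, Mellanox 0x15b3 for the ConnectX IDs) and, because the e810 pool is non-empty on this rig and the transport is tcp, narrows pci_devs to the e810 entries. A hedged sketch of that classification step, using only the IDs visible in the trace (the function name is illustrative, not taken from common.sh):

    # Hypothetical: classify a "vendor:device" pair the way common.sh pools NICs.
    classify_nic() {
      case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 variants
        0x8086:0x37d2)               echo x722 ;;     # Intel X722
        0x15b3:*)                    echo mlx  ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
      esac
    }
    classify_nic 0x8086:0x159b   # -> e810, matching the 0000:31:00.x devices found below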
00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:33.357 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:33.357 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.357 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:33.357 Found net devices under 0000:31:00.0: cvl_0_0 00:38:33.357 
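Each matched PCI function is then resolved to its kernel net device by globbing sysfs, which is how 0000:31:00.0 maps to cvl_0_0 above (and 0000:31:00.1 to cvl_0_1 just below). The equivalent standalone lookup, shown as an illustration rather than the exact common.sh code path:

    # Hypothetical standalone version of the pci_net_devs glob traced above.
    pci=0000:31:00.0
    for d in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e "$d" ]] && echo "${d##*/}"   # prints the bound interface, e.g. cvl_0_0
    done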
17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:33.358 Found net devices under 0000:31:00.1: cvl_0_1 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:33.358 17:05:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:33.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:33.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:38:33.618 00:38:33.618 --- 10.0.0.2 ping statistics --- 00:38:33.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.618 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:33.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:33.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:38:33.618 00:38:33.618 --- 10.0.0.1 ping statistics --- 00:38:33.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.618 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2570096 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2570096 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2570096 ']' 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
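By this point nvmf_tcp_init has turned the two E810 ports into a point-to-point test bed: cvl_0_0 (10.0.0.2, the target side) is moved into the cvl_0_0_ns_spdk namespace, cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, an ACCEPT rule tagged with an SPDK_NVMF comment opens TCP/4420 so teardown can find and strip it later, and both directions are ping-verified. Condensed from the commands traced above (run as root):

    # Replay of the namespace plumbing, gathered from the trace above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1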
00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:33.618 17:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:38:33.618 [2024-12-06 17:05:22.240273] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:33.618 [2024-12-06 17:05:22.241299] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:38:33.618 [2024-12-06 17:05:22.241340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.878 [2024-12-06 17:05:22.327507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:33.878 [2024-12-06 17:05:22.355500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.878 [2024-12-06 17:05:22.355550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.878 [2024-12-06 17:05:22.355559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.878 [2024-12-06 17:05:22.355567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.878 [2024-12-06 17:05:22.355573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:33.878 [2024-12-06 17:05:22.357765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.879 [2024-12-06 17:05:22.357926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:33.879 [2024-12-06 17:05:22.358079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.879 [2024-12-06 17:05:22.358079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:33.879 [2024-12-06 17:05:22.358402] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
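The target is launched inside that namespace as ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc: -m 0xF starts reactors on cores 0 through 3, --interrupt-mode makes them sleep on file descriptors instead of busy-polling (hence the "Set SPDK running in interrupt mode" notice), and --wait-for-rpc holds subsystem initialization until the suite issues its pre-init RPCs. waitforlisten then polls until the RPC socket answers; a hedged simplification of that wait loop (retry count and interval are illustrative):

    # Hypothetical simplification of waitforlisten: poll until the RPC socket answers.
    rpc_addr=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
      if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt is listening on $rpc_addr"
        break
      fi
      sleep 0.1
    done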
00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.447 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.447 [2024-12-06 17:05:23.130532] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:34.447 [2024-12-06 17:05:23.130963] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:34.447 [2024-12-06 17:05:23.131445] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:34.447 [2024-12-06 17:05:23.131590] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
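Holding the target at --wait-for-rpc is what allows bdev_set_options to run before the bdev layer initializes: -p 5 -c 1 (by SPDK's rpc.py convention a bdev_io pool of only 5 entries with a per-thread cache of 1, an interpretation not spelled out in the trace itself) starves the pool deliberately, so bdevperf hits the IO-wait retry path this test exists to exercise. The RPC sequence, gathered from the trace above and the subsystem setup that follows below:

    # rpc_cmd in the suite forwards to scripts/rpc.py against the target's socket.
    rpc.py bdev_set_options -p 5 -c 1        # tiny bdev_io pool: force IO-wait retries
    rpc.py framework_start_init              # now finish subsystem initialization
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420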
00:38:34.448 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.448 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:34.448 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.448 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.448 [2024-12-06 17:05:23.138637] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.708 Malloc0 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:34.708 [2024-12-06 17:05:23.186774] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2570160 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2570162 00:38:34.708 17:05:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2570163 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2570165 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.708 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.708 { 00:38:34.708 "params": { 00:38:34.708 "name": "Nvme$subsystem", 00:38:34.708 "trtype": "$TEST_TRANSPORT", 00:38:34.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.708 "adrfam": "ipv4", 00:38:34.708 "trsvcid": "$NVMF_PORT", 00:38:34.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.709 "hdgst": ${hdgst:-false}, 00:38:34.709 "ddgst": ${ddgst:-false} 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 } 00:38:34.709 EOF 00:38:34.709 )") 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.709 { 00:38:34.709 "params": { 00:38:34.709 "name": "Nvme$subsystem", 00:38:34.709 "trtype": "$TEST_TRANSPORT", 00:38:34.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.709 "adrfam": "ipv4", 00:38:34.709 "trsvcid": "$NVMF_PORT", 00:38:34.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.709 "hdgst": ${hdgst:-false}, 00:38:34.709 "ddgst": ${ddgst:-false} 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 } 00:38:34.709 EOF 00:38:34.709 )") 
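Each bdevperf instance receives its bdev configuration through bash process substitution: the --json /dev/fd/63 argument visible above is a file descriptor backed by gen_nvmf_target_json, which expands the heredoc's $-placeholders into one bdev_nvme_attach_controller stanza per subsystem (the printf output further down shows the expanded result). A minimal reproduction of that plumbing; the outer "subsystems"/"bdev" wrapper keys are an assumption reconstructed from SPDK config conventions, since only the inner stanza appears in this trace, and gen_json/some_consumer are illustrative names:

    # Hypothetical: hand a generated JSON config to a consumer via /dev/fd.
    gen_json() {
      jq -n '{subsystems: [{subsystem: "bdev", config: [{
        method: "bdev_nvme_attach_controller",
        params: {name: "Nvme1", trtype: "tcp", traddr: "10.0.0.2",
                 adrfam: "ipv4", trsvcid: "4420",
                 subnqn: "nqn.2016-06.io.spdk:cnode1",
                 hostnqn: "nqn.2016-06.io.spdk:host1",
                 hdgst: false, ddgst: false}}]}]}'
    }
    some_consumer --json <(gen_json)   # bash exposes <(...) as /dev/fd/NN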
00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.709 { 00:38:34.709 "params": { 00:38:34.709 "name": "Nvme$subsystem", 00:38:34.709 "trtype": "$TEST_TRANSPORT", 00:38:34.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.709 "adrfam": "ipv4", 00:38:34.709 "trsvcid": "$NVMF_PORT", 00:38:34.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.709 "hdgst": ${hdgst:-false}, 00:38:34.709 "ddgst": ${ddgst:-false} 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 } 00:38:34.709 EOF 00:38:34.709 )") 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:34.709 { 00:38:34.709 "params": { 00:38:34.709 "name": "Nvme$subsystem", 00:38:34.709 "trtype": "$TEST_TRANSPORT", 00:38:34.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.709 "adrfam": "ipv4", 00:38:34.709 "trsvcid": "$NVMF_PORT", 00:38:34.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.709 "hdgst": ${hdgst:-false}, 00:38:34.709 "ddgst": ${ddgst:-false} 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 } 00:38:34.709 EOF 00:38:34.709 )") 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2570160 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.709 "params": { 00:38:34.709 "name": "Nvme1", 00:38:34.709 "trtype": "tcp", 00:38:34.709 "traddr": "10.0.0.2", 00:38:34.709 "adrfam": "ipv4", 00:38:34.709 "trsvcid": "4420", 00:38:34.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.709 "hdgst": false, 00:38:34.709 "ddgst": false 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 }' 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.709 "params": { 00:38:34.709 "name": "Nvme1", 00:38:34.709 "trtype": "tcp", 00:38:34.709 "traddr": "10.0.0.2", 00:38:34.709 "adrfam": "ipv4", 00:38:34.709 "trsvcid": "4420", 00:38:34.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.709 "hdgst": false, 00:38:34.709 "ddgst": false 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 }' 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.709 "params": { 00:38:34.709 "name": "Nvme1", 00:38:34.709 "trtype": "tcp", 00:38:34.709 "traddr": "10.0.0.2", 00:38:34.709 "adrfam": "ipv4", 00:38:34.709 "trsvcid": "4420", 00:38:34.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.709 "hdgst": false, 00:38:34.709 "ddgst": false 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 }' 00:38:34.709 17:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:34.709 "params": { 00:38:34.709 "name": "Nvme1", 00:38:34.709 "trtype": "tcp", 00:38:34.709 "traddr": "10.0.0.2", 00:38:34.709 "adrfam": "ipv4", 00:38:34.709 "trsvcid": "4420", 00:38:34.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.709 "hdgst": false, 00:38:34.709 "ddgst": false 00:38:34.709 }, 00:38:34.709 "method": "bdev_nvme_attach_controller" 00:38:34.709 }' 00:38:34.709 [2024-12-06 17:05:23.223189] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:38:34.709 [2024-12-06 17:05:23.223243] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:38:34.709 [2024-12-06 17:05:23.224357] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:38:34.709 [2024-12-06 17:05:23.224405] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:38:34.709 [2024-12-06 17:05:23.224935] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:38:34.709 [2024-12-06 17:05:23.224979] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:34.709 [2024-12-06 17:05:23.225521] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:38:34.709 [2024-12-06 17:05:23.225569] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:38:34.709 [2024-12-06 17:05:23.372250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.709 [2024-12-06 17:05:23.383760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:34.971 [2024-12-06 17:05:23.421815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.971 [2024-12-06 17:05:23.433177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:34.971 [2024-12-06 17:05:23.472197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.971 [2024-12-06 17:05:23.483909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:34.971 [2024-12-06 17:05:23.524155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.971 [2024-12-06 17:05:23.536068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:34.971 Running I/O for 1 seconds... 00:38:35.232 Running I/O for 1 seconds... 00:38:35.232 Running I/O for 1 seconds... 00:38:35.232 Running I/O for 1 seconds... 
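Four single-core bdevperf instances now run concurrently against the same subsystem, one per I/O type, each pinned to its own core (-m 0x10/0x20/0x40/0x80) with its own instance ID and a 256 MB memory reservation, so the target's reactors service write, read, flush, and unmap traffic at the same time. Condensed from the invocations above (gen_json stands in for gen_nvmf_target_json, as sketched earlier):

    # The four workloads traced above: 1 second each at queue depth 128, 4 KiB I/O.
    BP=./build/examples/bdevperf
    "$BP" -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    "$BP" -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
    "$BP" -m 0x40 -i 3 --json <(gen_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
    "$BP" -m 0x80 -i 4 --json <(gen_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
    wait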
00:38:36.169 179328.00 IOPS, 700.50 MiB/s 00:38:36.169 Latency(us) 00:38:36.169 [2024-12-06T16:05:24.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.169 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:36.169 Nvme1n1 : 1.00 178970.90 699.11 0.00 0.00 711.15 300.37 1979.73 00:38:36.169 [2024-12-06T16:05:24.862Z] =================================================================================================================== 00:38:36.169 [2024-12-06T16:05:24.862Z] Total : 178970.90 699.11 0.00 0.00 711.15 300.37 1979.73 00:38:36.169 14508.00 IOPS, 56.67 MiB/s [2024-12-06T16:05:24.862Z] 10978.00 IOPS, 42.88 MiB/s 00:38:36.169 Latency(us) 00:38:36.169 [2024-12-06T16:05:24.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.169 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:36.169 Nvme1n1 : 1.01 14571.05 56.92 0.00 0.00 8759.48 2293.76 11905.71 00:38:36.169 [2024-12-06T16:05:24.863Z] =================================================================================================================== 00:38:36.170 [2024-12-06T16:05:24.863Z] Total : 14571.05 56.92 0.00 0.00 8759.48 2293.76 11905.71 00:38:36.170 00:38:36.170 Latency(us) 00:38:36.170 [2024-12-06T16:05:24.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.170 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:36.170 Nvme1n1 : 1.01 11043.70 43.14 0.00 0.00 11551.92 4805.97 18459.31 00:38:36.170 [2024-12-06T16:05:24.863Z] =================================================================================================================== 00:38:36.170 [2024-12-06T16:05:24.863Z] Total : 11043.70 43.14 0.00 0.00 11551.92 4805.97 18459.31 00:38:36.170 11752.00 IOPS, 45.91 MiB/s 00:38:36.170 Latency(us) 00:38:36.170 [2024-12-06T16:05:24.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:36.170 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:36.170 Nvme1n1 : 1.01 11812.16 46.14 0.00 0.00 10803.74 4150.61 16493.23 00:38:36.170 [2024-12-06T16:05:24.863Z] =================================================================================================================== 00:38:36.170 [2024-12-06T16:05:24.863Z] Total : 11812.16 46.14 0.00 0.00 10803.74 4150.61 16493.23 00:38:36.170 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2570162 00:38:36.429 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2570163 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2570165 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:36.430 17:05:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:36.430 rmmod nvme_tcp 00:38:36.430 rmmod nvme_fabrics 00:38:36.430 rmmod nvme_keyring 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2570096 ']' 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2570096 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2570096 ']' 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2570096 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:36.430 17:05:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2570096 00:38:36.430 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:36.430 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:36.430 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2570096' 00:38:36.430 killing process with pid 2570096 00:38:36.430 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2570096 00:38:36.430 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2570096 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:36.689 17:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.689 17:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:38.594 00:38:38.594 real 0m10.691s 00:38:38.594 user 0m14.073s 00:38:38.594 sys 0m5.819s 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:38.594 ************************************ 00:38:38.594 END TEST nvmf_bdev_io_wait 00:38:38.594 ************************************ 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:38.594 ************************************ 00:38:38.594 START TEST nvmf_queue_depth 00:38:38.594 ************************************ 00:38:38.594 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:38.855 * Looking for test storage... 
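run_test is the harness wrapper visible throughout this log: it prints the START/END banners, times the script (the real/user/sys triple above comes from it) and propagates the script's exit status to the suite. Schematically, the call that opens this block — run_test itself is defined in test/common/autotest_common.sh, so this is a paraphrase of the observed behaviour, not its source:

# Banner, `time` the script, fail the suite on a non-zero exit.
run_test nvmf_queue_depth \
    "$rootdir/test/nvmf/target/queue_depth.sh" --transport=tcp --interrupt-mode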
00:38:38.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:38.855 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.856 --rc genhtml_branch_coverage=1 00:38:38.856 --rc genhtml_function_coverage=1 00:38:38.856 --rc genhtml_legend=1 00:38:38.856 --rc geninfo_all_blocks=1 00:38:38.856 --rc geninfo_unexecuted_blocks=1 00:38:38.856 00:38:38.856 ' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.856 --rc genhtml_branch_coverage=1 00:38:38.856 --rc genhtml_function_coverage=1 00:38:38.856 --rc genhtml_legend=1 00:38:38.856 --rc geninfo_all_blocks=1 00:38:38.856 --rc geninfo_unexecuted_blocks=1 00:38:38.856 00:38:38.856 ' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.856 --rc genhtml_branch_coverage=1 00:38:38.856 --rc genhtml_function_coverage=1 00:38:38.856 --rc genhtml_legend=1 00:38:38.856 --rc geninfo_all_blocks=1 00:38:38.856 --rc geninfo_unexecuted_blocks=1 00:38:38.856 00:38:38.856 ' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:38.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:38.856 --rc genhtml_branch_coverage=1 00:38:38.856 --rc genhtml_function_coverage=1 00:38:38.856 --rc genhtml_legend=1 00:38:38.856 --rc geninfo_all_blocks=1 00:38:38.856 --rc 
geninfo_unexecuted_blocks=1 00:38:38.856 00:38:38.856 ' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:38.856 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:38.857 17:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:44.148 17:05:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:44.148 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:44.148 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:44.149 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:38:44.149 Found net devices under 0000:31:00.0: cvl_0_0 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:44.149 Found net devices under 0000:31:00.1: cvl_0_1 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:44.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:44.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:38:44.149 00:38:44.149 --- 10.0.0.2 ping statistics --- 00:38:44.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.149 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:44.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:44.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:38:44.149 00:38:44.149 --- 10.0.0.1 ping statistics --- 00:38:44.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.149 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:44.149 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2574886 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2574886 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2574886 ']' 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:44.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
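The two pings above are the final sanity check on the test topology: the target-side E810 port (cvl_0_0, 10.0.0.2) has been moved into the network namespace cvl_0_0_ns_spdk while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, so the NVMe/TCP traffic actually crosses the link. Condensed from the trace above, with the address flushes and the iptables comment option dropped for brevity:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> initiator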
00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:44.409 17:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:44.409 [2024-12-06 17:05:32.880415] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:44.409 [2024-12-06 17:05:32.881558] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:38:44.409 [2024-12-06 17:05:32.881608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:44.409 [2024-12-06 17:05:32.975804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.409 [2024-12-06 17:05:33.002269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:44.409 [2024-12-06 17:05:33.002319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:44.409 [2024-12-06 17:05:33.002328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:44.409 [2024-12-06 17:05:33.002335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:44.409 [2024-12-06 17:05:33.002342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:44.409 [2024-12-06 17:05:33.003068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:44.409 [2024-12-06 17:05:33.066749] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:44.409 [2024-12-06 17:05:33.067013] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
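With nvmf_tgt now running in interrupt mode inside the namespace, queue_depth.sh provisions it over the RPC socket: a TCP transport, a 64 MiB malloc bdev, a subsystem, its namespace and a TCP listener. Stripped of the rpc_cmd wrapper, the sequence traced below is equivalent to:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420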
00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.344 [2024-12-06 17:05:33.699881] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.344 Malloc0 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.344 [2024-12-06 17:05:33.751665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2575201 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2575201 /var/tmp/bdevperf.sock 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2575201 ']' 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:45.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:45.344 [2024-12-06 17:05:33.787933] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
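The client being launched here runs a single verify job at queue depth 1024 (-q 1024 -o 4096 -w verify -t 10), the point of the test being to keep far more I/O in flight than the earlier jobs did. Alongside the human-readable table, bdevperf emits the same summary as JSON (the blob at the end of the 10-second run below); if that output is captured to a file, the headline figures can be pulled out with jq — the file name here is illustrative:

# Field names match the JSON summary printed below.
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' bdevperf.json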
00:38:45.344 [2024-12-06 17:05:33.787981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575201 ] 00:38:45.344 [2024-12-06 17:05:33.866873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.344 [2024-12-06 17:05:33.885586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:45.344 17:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:45.603 NVMe0n1 00:38:45.603 17:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:45.603 17:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:45.603 Running I/O for 10 seconds... 00:38:47.936 8929.00 IOPS, 34.88 MiB/s [2024-12-06T16:05:37.568Z] 9087.50 IOPS, 35.50 MiB/s [2024-12-06T16:05:38.504Z] 10440.67 IOPS, 40.78 MiB/s [2024-12-06T16:05:39.440Z] 11251.25 IOPS, 43.95 MiB/s [2024-12-06T16:05:40.378Z] 11681.40 IOPS, 45.63 MiB/s [2024-12-06T16:05:41.316Z] 12030.83 IOPS, 47.00 MiB/s [2024-12-06T16:05:42.254Z] 12290.14 IOPS, 48.01 MiB/s [2024-12-06T16:05:43.631Z] 12463.38 IOPS, 48.69 MiB/s [2024-12-06T16:05:44.564Z] 12628.00 IOPS, 49.33 MiB/s [2024-12-06T16:05:44.564Z] 12721.80 IOPS, 49.69 MiB/s 00:38:55.871 Latency(us) 00:38:55.871 [2024-12-06T16:05:44.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.871 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:55.871 Verification LBA range: start 0x0 length 0x4000 00:38:55.871 NVMe0n1 : 10.04 12765.15 49.86 0.00 0.00 79941.98 8847.36 70778.88 00:38:55.871 [2024-12-06T16:05:44.564Z] =================================================================================================================== 00:38:55.871 [2024-12-06T16:05:44.564Z] Total : 12765.15 49.86 0.00 0.00 79941.98 8847.36 70778.88 00:38:55.871 { 00:38:55.871 "results": [ 00:38:55.871 { 00:38:55.871 "job": "NVMe0n1", 00:38:55.872 "core_mask": "0x1", 00:38:55.872 "workload": "verify", 00:38:55.872 "status": "finished", 00:38:55.872 "verify_range": { 00:38:55.872 "start": 0, 00:38:55.872 "length": 16384 00:38:55.872 }, 00:38:55.872 "queue_depth": 1024, 00:38:55.872 "io_size": 4096, 00:38:55.872 "runtime": 10.043672, 00:38:55.872 "iops": 12765.152028063043, 00:38:55.872 "mibps": 49.86387510962126, 00:38:55.872 "io_failed": 0, 00:38:55.872 "io_timeout": 0, 00:38:55.872 "avg_latency_us": 79941.97555637019, 00:38:55.872 "min_latency_us": 8847.36, 00:38:55.872 "max_latency_us": 70778.88 00:38:55.872 } 00:38:55.872 ], 00:38:55.872 
"core_count": 1 00:38:55.872 } 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2575201 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2575201 ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2575201 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2575201 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2575201' 00:38:55.872 killing process with pid 2575201 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2575201 00:38:55.872 Received shutdown signal, test time was about 10.000000 seconds 00:38:55.872 00:38:55.872 Latency(us) 00:38:55.872 [2024-12-06T16:05:44.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.872 [2024-12-06T16:05:44.565Z] =================================================================================================================== 00:38:55.872 [2024-12-06T16:05:44.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2575201 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:55.872 rmmod nvme_tcp 00:38:55.872 rmmod nvme_fabrics 00:38:55.872 rmmod nvme_keyring 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:55.872 17:05:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2574886 ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2574886 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2574886 ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2574886 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2574886 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2574886' 00:38:55.872 killing process with pid 2574886 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2574886 00:38:55.872 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2574886 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.130 17:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:58.077 00:38:58.077 real 0m19.426s 00:38:58.077 user 0m22.648s 00:38:58.077 sys 0m5.343s 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:58.077 ************************************ 00:38:58.077 END TEST nvmf_queue_depth 00:38:58.077 ************************************ 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:58.077 ************************************ 00:38:58.077 START TEST nvmf_target_multipath 00:38:58.077 ************************************ 00:38:58.077 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:58.376 * Looking for test storage... 00:38:58.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:58.376 17:05:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:58.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.376 --rc genhtml_branch_coverage=1 00:38:58.376 --rc genhtml_function_coverage=1 00:38:58.376 --rc genhtml_legend=1 00:38:58.376 --rc geninfo_all_blocks=1 00:38:58.376 --rc geninfo_unexecuted_blocks=1 00:38:58.376 00:38:58.376 ' 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:58.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.376 --rc genhtml_branch_coverage=1 00:38:58.376 --rc genhtml_function_coverage=1 00:38:58.376 --rc genhtml_legend=1 00:38:58.376 --rc geninfo_all_blocks=1 00:38:58.376 --rc geninfo_unexecuted_blocks=1 00:38:58.376 00:38:58.376 ' 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:58.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.376 --rc genhtml_branch_coverage=1 00:38:58.376 --rc genhtml_function_coverage=1 00:38:58.376 --rc genhtml_legend=1 00:38:58.376 --rc geninfo_all_blocks=1 00:38:58.376 --rc 
geninfo_unexecuted_blocks=1 00:38:58.376 00:38:58.376 ' 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:58.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.376 --rc genhtml_branch_coverage=1 00:38:58.376 --rc genhtml_function_coverage=1 00:38:58.376 --rc genhtml_legend=1 00:38:58.376 --rc geninfo_all_blocks=1 00:38:58.376 --rc geninfo_unexecuted_blocks=1 00:38:58.376 00:38:58.376 ' 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.376 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
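
[Editor's note] The trace above shows multipath.sh sourcing nvmf/common.sh, which builds the initiator identity with nvme-cli (common.sh@17-19 in the trace): nvme gen-hostnqn emits a UUID-based NQN, and the host ID is that same UUID. A minimal sketch of the derivation, assuming only that nvme-cli is installed; the suffix-stripping expansion and the connect line are our illustration, not the harness's exact code:

    #!/usr/bin/env bash
    # Sketch: reproduce the host-identity setup seen at nvmf/common.sh@17-19.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:801c19ac-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare UUID; the trace shows HOSTID matching the NQN suffix
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # The array is later splatted into 'nvme connect' invocations, e.g. (illustrative):
    #   nvme connect "${NVME_HOST[@]}" -t tcp -a "$NVMF_FIRST_TARGET_IP" -s 4420 -n <subsystem nqn>

Keeping both values in one array means every connect in the suite identifies itself consistently, which matters when the target enforces per-host access control.
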
00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:58.377 17:05:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:58.377 17:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
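
[Editor's note] The scan that follows (gather_supported_nvmf_pci_devs, nvmf/common.sh@313-429 in the trace) walks a cached PCI-bus map for known NVMe-oF-capable NICs: Intel E810 IDs 0x1592/0x159b, X722 0x37d2, and a list of Mellanox IDs, then resolves each hit to its kernel net device through sysfs; that is where the "Found 0000:31:00.x" and "Found net devices under ..." lines below come from. A rough standalone equivalent, assuming lspci is available (the harness uses its own pci_bus_cache instead, and find_e810_netdevs is a hypothetical helper name):

    # Sketch: list E810 ports (vendor 0x8086, device 0x159b) and their net devices.
    find_e810_netdevs() {
      local pci
      for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$pci/net/"   # e.g. cvl_0_0 or cvl_0_1 on this rig
      done
    }
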
00:39:03.655 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:03.655 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:03.655 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:03.655 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:03.655 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:03.656 17:05:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:03.656 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:03.656 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:03.656 17:05:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:03.656 Found net devices under 0000:31:00.0: cvl_0_0 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:03.656 Found net devices under 0000:31:00.1: cvl_0_1 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:39:03.656 17:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:39:03.656 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:39:03.656 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:39:03.656 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:39:03.656 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:39:03.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:39:03.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms
00:39:03.657
00:39:03.657 --- 10.0.0.2 ping statistics ---
00:39:03.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:03.657 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:39:03.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:39:03.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms
00:39:03.657
00:39:03.657 --- 10.0.0.1 ping statistics ---
00:39:03.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:03.657 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:39:03.657 only one NIC for nvmf test
17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:03.657 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:39:03.657 17:05:52
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.657 17:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:06.195 17:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:06.195 00:39:06.195 real 0m7.567s 00:39:06.195 user 0m1.406s 00:39:06.195 sys 0m4.029s 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:06.195 ************************************ 00:39:06.195 END TEST nvmf_target_multipath 00:39:06.195 ************************************ 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:06.195 ************************************ 00:39:06.195 START TEST nvmf_zcopy 00:39:06.195 ************************************ 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:06.195 * Looking for test storage... 
00:39:06.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:06.195 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:06.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.196 --rc genhtml_branch_coverage=1 00:39:06.196 --rc genhtml_function_coverage=1 00:39:06.196 --rc genhtml_legend=1 00:39:06.196 --rc geninfo_all_blocks=1 00:39:06.196 --rc geninfo_unexecuted_blocks=1 00:39:06.196 00:39:06.196 ' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:06.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.196 --rc genhtml_branch_coverage=1 00:39:06.196 --rc genhtml_function_coverage=1 00:39:06.196 --rc genhtml_legend=1 00:39:06.196 --rc geninfo_all_blocks=1 00:39:06.196 --rc geninfo_unexecuted_blocks=1 00:39:06.196 00:39:06.196 ' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:06.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.196 --rc genhtml_branch_coverage=1 00:39:06.196 --rc genhtml_function_coverage=1 00:39:06.196 --rc genhtml_legend=1 00:39:06.196 --rc geninfo_all_blocks=1 00:39:06.196 --rc geninfo_unexecuted_blocks=1 00:39:06.196 00:39:06.196 ' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:06.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.196 --rc genhtml_branch_coverage=1 00:39:06.196 --rc genhtml_function_coverage=1 00:39:06.196 --rc genhtml_legend=1 00:39:06.196 --rc geninfo_all_blocks=1 00:39:06.196 --rc geninfo_unexecuted_blocks=1 00:39:06.196 00:39:06.196 ' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.196 17:05:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:06.196 17:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:11.472 17:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:11.472 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:11.472 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:11.472 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:11.473 Found net devices under 0000:31:00.0: cvl_0_0 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:11.473 Found net devices under 0000:31:00.1: cvl_0_1 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:11.473 17:05:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:11.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:11.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:39:11.473 00:39:11.473 --- 10.0.0.2 ping statistics --- 00:39:11.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.473 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:11.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:11.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:39:11.473 00:39:11.473 --- 10.0.0.1 ping statistics --- 00:39:11.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:11.473 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2586016 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2586016 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@835 -- # '[' -z 2586016 ']' 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:11.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.473 17:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:11.473 [2024-12-06 17:05:59.885767] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:11.473 [2024-12-06 17:05:59.886762] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:39:11.473 [2024-12-06 17:05:59.886800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:11.473 [2024-12-06 17:05:59.969923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.473 [2024-12-06 17:05:59.987389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:11.473 [2024-12-06 17:05:59.987420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:11.473 [2024-12-06 17:05:59.987428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:11.473 [2024-12-06 17:05:59.987436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:11.473 [2024-12-06 17:05:59.987442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:11.473 [2024-12-06 17:05:59.987989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:11.473 [2024-12-06 17:06:00.038054] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:11.473 [2024-12-06 17:06:00.038300] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
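Taken together, the nvmf_tcp_init and nvmfappstart records above perform the following bring-up. This is a condensed sketch rather than the framework code itself: the interface names (cvl_0_0, cvl_0_1), the addresses, and the binary path are the ones from this run.

# Clear any stale addresses, then split the two NIC ports: the target port
# moves into a private namespace, the initiator port stays in the root one.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port (the test tags the rule with -m comment for cleanup).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Launch the target inside the namespace: core mask 0x2 (core 1), tracepoint
# group mask 0xFFFF, interrupt mode, matching the startup notices above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2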
00:39:11.473 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:11.473 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:11.473 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:11.473 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 [2024-12-06 17:06:00.088769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 [2024-12-06 17:06:00.104805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:11.474 17:06:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 malloc0 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:11.474 { 00:39:11.474 "params": { 00:39:11.474 "name": "Nvme$subsystem", 00:39:11.474 "trtype": "$TEST_TRANSPORT", 00:39:11.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:11.474 "adrfam": "ipv4", 00:39:11.474 "trsvcid": "$NVMF_PORT", 00:39:11.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:11.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:11.474 "hdgst": ${hdgst:-false}, 00:39:11.474 "ddgst": ${ddgst:-false} 00:39:11.474 }, 00:39:11.474 "method": "bdev_nvme_attach_controller" 00:39:11.474 } 00:39:11.474 EOF 00:39:11.474 )") 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:11.474 17:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:11.474 "params": { 00:39:11.474 "name": "Nvme1", 00:39:11.474 "trtype": "tcp", 00:39:11.474 "traddr": "10.0.0.2", 00:39:11.474 "adrfam": "ipv4", 00:39:11.474 "trsvcid": "4420", 00:39:11.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:11.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:11.474 "hdgst": false, 00:39:11.474 "ddgst": false 00:39:11.474 }, 00:39:11.474 "method": "bdev_nvme_attach_controller" 00:39:11.474 }' 00:39:11.734 [2024-12-06 17:06:00.169742] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:39:11.734 [2024-12-06 17:06:00.169796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586230 ] 00:39:11.734 [2024-12-06 17:06:00.246590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.734 [2024-12-06 17:06:00.266041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.999 Running I/O for 10 seconds... 00:39:14.316 6590.00 IOPS, 51.48 MiB/s [2024-12-06T16:06:03.578Z] 7395.00 IOPS, 57.77 MiB/s [2024-12-06T16:06:04.975Z] 8238.00 IOPS, 64.36 MiB/s [2024-12-06T16:06:05.912Z] 8663.75 IOPS, 67.69 MiB/s [2024-12-06T16:06:06.852Z] 8911.80 IOPS, 69.62 MiB/s [2024-12-06T16:06:07.787Z] 9083.00 IOPS, 70.96 MiB/s [2024-12-06T16:06:08.722Z] 9204.29 IOPS, 71.91 MiB/s [2024-12-06T16:06:09.661Z] 9294.62 IOPS, 72.61 MiB/s [2024-12-06T16:06:10.600Z] 9365.89 IOPS, 73.17 MiB/s [2024-12-06T16:06:10.861Z] 9422.60 IOPS, 73.61 MiB/s 00:39:22.168 Latency(us) 00:39:22.168 [2024-12-06T16:06:10.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.168 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:22.168 Verification LBA range: start 0x0 length 0x1000 00:39:22.168 Nvme1n1 : 10.05 9387.44 73.34 0.00 0.00 13544.29 2389.33 43253.76 00:39:22.168 [2024-12-06T16:06:10.861Z] =================================================================================================================== 00:39:22.168 [2024-12-06T16:06:10.861Z] Total : 9387.44 73.34 0.00 0.00 13544.29 2389.33 43253.76 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2588866 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:22.168 { 00:39:22.168 "params": { 00:39:22.168 "name": "Nvme$subsystem", 00:39:22.168 "trtype": "$TEST_TRANSPORT", 00:39:22.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:22.168 "adrfam": "ipv4", 00:39:22.168 "trsvcid": "$NVMF_PORT", 00:39:22.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:22.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:22.168 "hdgst": ${hdgst:-false}, 00:39:22.168 "ddgst": ${ddgst:-false} 00:39:22.168 }, 00:39:22.168 "method": "bdev_nvme_attach_controller" 00:39:22.168 } 00:39:22.168 EOF 00:39:22.168 )") 00:39:22.168 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:22.168 
17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:22.168 [2024-12-06 17:06:10.744339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.168 [2024-12-06 17:06:10.744366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:22.169 17:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:22.169 "params": { 00:39:22.169 "name": "Nvme1", 00:39:22.169 "trtype": "tcp", 00:39:22.169 "traddr": "10.0.0.2", 00:39:22.169 "adrfam": "ipv4", 00:39:22.169 "trsvcid": "4420", 00:39:22.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:22.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:22.169 "hdgst": false, 00:39:22.169 "ddgst": false 00:39:22.169 }, 00:39:22.169 "method": "bdev_nvme_attach_controller" 00:39:22.169 }' 00:39:22.169 [2024-12-06 17:06:10.752302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.752311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.760300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.760309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.768300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.768308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.769294] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:39:22.169 [2024-12-06 17:06:10.769343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588866 ] 00:39:22.169 [2024-12-06 17:06:10.776299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.776310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.788300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.788309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.796300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.796308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.804300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.804309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.812300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.812308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.820300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.820308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.828300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.828309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.832654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.169 [2024-12-06 17:06:10.836300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.836309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.844302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.844313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.848709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.169 [2024-12-06 17:06:10.852300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.852308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.169 [2024-12-06 17:06:10.860305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.169 [2024-12-06 17:06:10.860314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.868304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.868316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.876303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:39:22.430 [2024-12-06 17:06:10.876316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.884301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.884312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.892300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.892309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.900301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.900310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.908306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.908318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.916303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.916316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.924301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.924311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.932301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.932311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.940301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.940312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.948301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.948311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.956300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.956308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.964299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.430 [2024-12-06 17:06:10.964307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.430 [2024-12-06 17:06:10.972300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:10.972308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:10.980300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:10.980308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:10.988299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:10.988307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 
17:06:10.996301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:10.996311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.004302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.004311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.012299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.012307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.020300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.020308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.028300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.028309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.036300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.036309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.044300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.044309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.052300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.052308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.060299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.060308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.068300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.068308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.076300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.076308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.431 [2024-12-06 17:06:11.084300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.431 [2024-12-06 17:06:11.084309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.132588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.132603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.140302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.140313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 Running I/O for 5 seconds... 
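Once the target is up, the rpc_cmd calls traced above configure the zcopy target and kick off the first bdevperf pass. Below is a sketch of the equivalent plain rpc.py sequence, assuming rpc_cmd is a thin wrapper around scripts/rpc.py against the /var/tmp/spdk.sock socket named in the waitforlisten record; every RPC method name and flag appears verbatim in the trace.

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed wrapper; socket name from the trace
# TCP transport with zero-copy enabled (-o and -c 0 exactly as traced).
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem capped at 10 namespaces, any host allowed.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# First pass: 10 s verify workload, queue depth 128, 8 KiB I/O; the JSON fed
# through /dev/fd/62 is the gen_nvmf_target_json output printed above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

The verify pass ramps from roughly 6.6k to 9.4k IOPS as it warms up, and the summary table reports runtime, IOPS, MiB/s, failures, timeouts, and average/min/max latency in microseconds for the single Nvme1n1 job.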
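The dense run of paired subsystem.c:2130 / nvmf_rpc.c:1520 errors that follows is, judging from the trace, deliberate rather than a build failure: while the second bdevperf job (5 s randrw, 50% reads, queue depth 128, 8 KiB I/O) is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already holds, so each attempt logs "Requested NSID 1 already in use" followed by "Unable to add namespace". A minimal loop that would produce this pattern, reusing $RPC from the sketch above and the perfpid variable set in the trace; this is an assumption about the test's shape, and the real zcopy.sh may drive it differently:

# Re-issue a conflicting namespace add for as long as bdevperf runs; every
# call is expected to fail, exercising the RPC path (and, per the
# nvmf_rpc_ns_paused frames, the subsystem pause/resume path) under zcopy load.
while kill -0 "$perfpid" 2> /dev/null; do
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done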
00:39:22.692 [2024-12-06 17:06:11.153517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.153534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.164587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.164603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.177304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.177320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.189437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.189452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.201523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.201539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.212151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.212167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.224983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.224999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.237416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.237432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.248154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.248169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.254144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.254159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.262782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.262798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.272013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.272028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.284695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.284710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.297128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.297143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.309127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 
[2024-12-06 17:06:11.309143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.321126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.321141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.332633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.332648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.345171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.345186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.357803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.357818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.367918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.367933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.373704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.373719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.692 [2024-12-06 17:06:11.383172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.692 [2024-12-06 17:06:11.383187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.389059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.389074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.398464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.398480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.407961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.407975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.413651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.413665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.422318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.422337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.433163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.433177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.445467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.445483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.454778] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.454793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.463580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.463595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.469326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.469341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.479880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.479895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.485683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.485698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.494379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.494394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.503752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.503767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.509465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.509480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.519314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.519329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.525235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.525250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.953 [2024-12-06 17:06:11.535130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.953 [2024-12-06 17:06:11.535145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.954 [2024-12-06 17:06:11.540976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.954 [2024-12-06 17:06:11.540990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.954 [2024-12-06 17:06:11.551258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.954 [2024-12-06 17:06:11.551273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.954 [2024-12-06 17:06:11.557003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.954 [2024-12-06 17:06:11.557018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:22.954 [2024-12-06 17:06:11.566964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:22.954 [2024-12-06 17:06:11.566979] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:22.954 [2024-12-06 17:06:11.574763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:22.954 [2024-12-06 17:06:11.574777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
... (the same two-line error pair repeats for every add-namespace attempt from 17:06:11.583958 through 17:06:14.131858, with only the timestamps changing; the per-second I/O progress samples interleaved with the errors are kept below) ...
00:39:23.491 19419.00 IOPS, 151.71 MiB/s [2024-12-06T16:06:12.184Z]
00:39:24.535 19448.00 IOPS, 151.94 MiB/s [2024-12-06T16:06:13.228Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.137608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.137623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.147826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.147841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.153673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.153687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 19471.33 IOPS, 152.12 MiB/s [2024-12-06T16:06:14.275Z] [2024-12-06 17:06:14.163480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.163495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.169257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.169272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.179403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.179418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.185069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.185084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.194911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.194927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.203701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.203716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.209562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.209577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.219178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.219194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.224998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.225014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.234855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.234870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.244003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.244019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 
17:06:14.249833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.249848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.259204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.259219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.582 [2024-12-06 17:06:14.265112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.582 [2024-12-06 17:06:14.265126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.274472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.274487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.283959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.283974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.289689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.289704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.299276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.299291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.304995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.305010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.315562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.315577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.321471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.321487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.331716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.331731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.337391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.337406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.346738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.346752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.356112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.356127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.368721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.368737] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.381228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.381243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.393164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.393180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.405624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.405639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.416012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.416027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.421870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.421884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.430478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.430493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.439847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.439862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.445464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.445479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.455354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.455368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.461086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.461106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.471338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.471353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.477153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.477168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.487202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.487216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.492915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.492929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.502662] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.502677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.512030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.512044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.517984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.517998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.527267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.527282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:25.843 [2024-12-06 17:06:14.533141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:25.843 [2024-12-06 17:06:14.533156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.542592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.542607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.552146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.552160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.558051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.558066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.566654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.566669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.575538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.575553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.581396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.581410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.591703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.591718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.597583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.597598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.607653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.607669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.613536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.613550] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.622896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.622911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.631517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.631532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.637317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.637332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.647090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.647108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.655033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.655048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.662414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.662428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.672142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.672157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.678036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.678050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.688114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.688129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.700736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.700751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.713182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.713197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.724667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.724686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.737359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.737374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.749309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.749324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.761384] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.761399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.773448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.773463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.784334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.784349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.104 [2024-12-06 17:06:14.790702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.104 [2024-12-06 17:06:14.790718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.364 [2024-12-06 17:06:14.798307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.364 [2024-12-06 17:06:14.798322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.364 [2024-12-06 17:06:14.809129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.364 [2024-12-06 17:06:14.809144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.364 [2024-12-06 17:06:14.820941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.364 [2024-12-06 17:06:14.820956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.364 [2024-12-06 17:06:14.833143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.364 [2024-12-06 17:06:14.833157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.364 [2024-12-06 17:06:14.845149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.364 [2024-12-06 17:06:14.845164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.364 [2024-12-06 17:06:14.857115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.364 [2024-12-06 17:06:14.857130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.364 [2024-12-06 17:06:14.869439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.869454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.880975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.880989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.892627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.892641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.905318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.905333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.917332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.917347] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.929298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.929313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.941414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.941432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.952944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.952959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.965511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.965526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.976232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.976248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.982075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.982090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:14.990562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:14.990577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.000092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.000112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.005880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.005895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.015525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.015539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.021231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.021245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.030893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.030909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.039484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.039499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.045204] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.045218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.365 [2024-12-06 17:06:15.054720] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.365 [2024-12-06 17:06:15.054735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.064062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.064077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.076616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.076631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.088791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.088806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.101506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.101521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.113343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.113359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.124112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.124132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.130089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.130108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.138862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.138877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.147587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.147602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 19489.50 IOPS, 152.26 MiB/s [2024-12-06T16:06:15.338Z] [2024-12-06 17:06:15.160483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.160498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.166878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.166893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.174712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.174727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.645 [2024-12-06 17:06:15.183954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.645 [2024-12-06 17:06:15.183969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.189800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:39:26.646 [2024-12-06 17:06:15.189814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.198578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.198593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.208138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.208153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.213927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.213942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.223592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.223608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.229356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.229371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.239270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.239286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.244999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.245013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.255307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.255322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.261209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.261224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.271128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.271143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.277110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.277125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.286427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.286442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.297059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.297075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.309196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.309212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.321236] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.321252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.646 [2024-12-06 17:06:15.333351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.646 [2024-12-06 17:06:15.333366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.345649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.345664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.356390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.356405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.362158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.362173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.370689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.370704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.380176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.380192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.392778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.392793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.405404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.405420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.417078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.417093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.429075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.429090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.441079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.441093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.453123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.453139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.465155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.465170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.477222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.477237] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.489721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.489737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.500334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.500349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.506026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.506041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.515255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.515270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.521261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.521275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.531354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.531369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.537076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.537091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.547131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.908 [2024-12-06 17:06:15.547147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.908 [2024-12-06 17:06:15.554338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.909 [2024-12-06 17:06:15.554354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.909 [2024-12-06 17:06:15.564175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.909 [2024-12-06 17:06:15.564191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.909 [2024-12-06 17:06:15.577091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.909 [2024-12-06 17:06:15.577110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.909 [2024-12-06 17:06:15.589678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.909 [2024-12-06 17:06:15.589693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:26.909 [2024-12-06 17:06:15.600159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:26.909 [2024-12-06 17:06:15.600174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.612793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.612809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.624896] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.624911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.636952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.636966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.649326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.649341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.661660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.661675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.672448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.672464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.678268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.678284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.687091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.687111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.695006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.695021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.703498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.703513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.709251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.709266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.719165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.719180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.724969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.724983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.735335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.735350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.741193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.741208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.750570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.750585] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.760089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.760110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.772830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.168 [2024-12-06 17:06:15.772846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.168 [2024-12-06 17:06:15.785333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.785349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.169 [2024-12-06 17:06:15.796175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.796191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.169 [2024-12-06 17:06:15.809177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.809192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.169 [2024-12-06 17:06:15.821028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.821043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.169 [2024-12-06 17:06:15.833457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.833472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.169 [2024-12-06 17:06:15.844212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.844227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.169 [2024-12-06 17:06:15.850097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.850116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.169 [2024-12-06 17:06:15.859245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.169 [2024-12-06 17:06:15.859260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.864964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.864979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.875666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.875681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.881427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.881442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.890995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.891010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.899709] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.899724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.905326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.905340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.915283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.915298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.921419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.921433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.931235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.931250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.936958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.936972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.946803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.946819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.956112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.956127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.968690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.968704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.981105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.428 [2024-12-06 17:06:15.981120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.428 [2024-12-06 17:06:15.993296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:15.993312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.005693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.005709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.015771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.015787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.021337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.021358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.030799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.030814] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.039601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.039616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.045564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.045579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.055614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.055630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.061439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.061454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.070898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.070913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.079543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.079558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.085197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.085211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.095087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.095105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.100949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.100964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.111049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.111063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.429 [2024-12-06 17:06:16.119654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.429 [2024-12-06 17:06:16.119669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.689 [2024-12-06 17:06:16.125248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.689 [2024-12-06 17:06:16.125263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.689 [2024-12-06 17:06:16.135542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.689 [2024-12-06 17:06:16.135557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.689 [2024-12-06 17:06:16.141289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.689 [2024-12-06 17:06:16.141304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.689 [2024-12-06 17:06:16.151584] 
00:39:27.689 19503.20 IOPS, 152.37 MiB/s [2024-12-06T16:06:16.382Z]
00:39:27.689                                                                                  Latency(us)
00:39:27.689 [2024-12-06T16:06:16.382Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:39:27.689 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:27.689 Nvme1n1            :       5.01   19504.59     152.38       0.00       0.00    6557.16    2607.79   11141.12
00:39:27.689 [2024-12-06T16:06:16.382Z] ===================================================================================================================
00:39:27.689 [2024-12-06T16:06:16.382Z] Total              :               19504.59     152.38       0.00       0.00    6557.16    2607.79   11141.12
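The paired errors that dominate this run are the test's intended failure path: while the I/O job is in flight, the harness keeps calling nvmf_subsystem_add_ns for NSID 1, which is still attached, so subsystem.c rejects the add and nvmf_rpc.c reports the failed RPC. A minimal stand-alone reproduction with SPDK's rpc.py might look like the sketch below; the transport, serial number, and malloc bdev are illustrative assumptions, and only the subsystem NQN and the add-namespace call are taken from this log.

  # Hedged sketch: trigger "Requested NSID 1 already in use" by hand.
  # Assumes a running nvmf_tgt reachable via scripts/rpc.py on its default socket.
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py bdev_malloc_create -b malloc0 64 512                           # 64 MiB, 512 B blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # second add fails:
  #   subsystem.c: *ERROR*: Requested NSID 1 already in use
  #   nvmf_rpc.c:  *ERROR*: Unable to add namespace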
17:06:16.244303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.690 [2024-12-06 17:06:16.244312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.690 [2024-12-06 17:06:16.252301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.690 [2024-12-06 17:06:16.252310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.690 [2024-12-06 17:06:16.260301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:27.690 [2024-12-06 17:06:16.260309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:27.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2588866) - No such process 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2588866 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:27.690 delay0 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:27.690 17:06:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:27.951 [2024-12-06 17:06:16.412293] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:34.531 Initializing NVMe Controllers 00:39:34.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:34.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:34.531 Initialization complete. Launching workers. 
00:39:34.531 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1411 00:39:34.531 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1698, failed to submit 33 00:39:34.531 success 1530, unsuccessful 168, failed 0 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:34.531 rmmod nvme_tcp 00:39:34.531 rmmod nvme_fabrics 00:39:34.531 rmmod nvme_keyring 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2586016 ']' 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2586016 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2586016 ']' 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2586016 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586016 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586016' 00:39:34.531 killing process with pid 2586016 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2586016 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2586016 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:34.531 17:06:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:34.531 17:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.438 17:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:36.439 00:39:36.439 real 0m30.611s 00:39:36.439 user 0m42.082s 00:39:36.439 sys 0m9.612s 00:39:36.439 17:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.439 17:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:36.439 ************************************ 00:39:36.439 END TEST nvmf_zcopy 00:39:36.439 ************************************ 00:39:36.439 17:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:36.439 17:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:36.439 17:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:36.439 17:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:36.439 ************************************ 00:39:36.439 START TEST nvmf_nmic 00:39:36.439 ************************************ 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:36.439 * Looking for test storage... 
00:39:36.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.439 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:36.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.698 --rc genhtml_branch_coverage=1 00:39:36.698 --rc genhtml_function_coverage=1 00:39:36.698 --rc genhtml_legend=1 00:39:36.698 --rc geninfo_all_blocks=1 00:39:36.698 --rc geninfo_unexecuted_blocks=1 00:39:36.698 00:39:36.698 ' 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:36.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.698 --rc genhtml_branch_coverage=1 00:39:36.698 --rc genhtml_function_coverage=1 00:39:36.698 --rc genhtml_legend=1 00:39:36.698 --rc geninfo_all_blocks=1 00:39:36.698 --rc geninfo_unexecuted_blocks=1 00:39:36.698 00:39:36.698 ' 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:36.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.698 --rc genhtml_branch_coverage=1 00:39:36.698 --rc genhtml_function_coverage=1 00:39:36.698 --rc genhtml_legend=1 00:39:36.698 --rc geninfo_all_blocks=1 00:39:36.698 --rc geninfo_unexecuted_blocks=1 00:39:36.698 00:39:36.698 ' 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:36.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.698 --rc genhtml_branch_coverage=1 00:39:36.698 --rc genhtml_function_coverage=1 00:39:36.698 --rc genhtml_legend=1 00:39:36.698 --rc geninfo_all_blocks=1 00:39:36.698 --rc geninfo_unexecuted_blocks=1 00:39:36.698 00:39:36.698 ' 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.698 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:36.699 17:06:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:36.699 17:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:41.979 17:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:41.979 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:41.980 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.980 17:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:41.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:41.980 Found net devices under 0000:31:00.0: cvl_0_0 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:41.980 
17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:41.980 Found net devices under 0000:31:00.1: cvl_0_1 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
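The nvmf_tcp_init steps traced here (namespace creation above, link-up and ping checks just below) give the target a private network namespace so initiator and target traffic crosses a real TCP path on a single host: cvl_0_1 keeps 10.0.0.1/24 on the host side while cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24. A minimal standalone sketch of the same wiring, with hypothetical interface names eth_ini/eth_tgt standing in for the cvl_* devices:
# Sketch only: condenses the netns wiring that nvmf/common.sh performs in this
# log; eth_ini/eth_tgt are stand-ins for cvl_0_1/cvl_0_0.
ip netns add spdk_tgt_ns                      # private namespace for the target
ip link set eth_tgt netns spdk_tgt_ns         # target-side port moves into it
ip addr add 10.0.0.1/24 dev eth_ini           # initiator side stays in the host
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
ip link set eth_ini up
ip netns exec spdk_tgt_ns ip link set eth_tgt up
ip netns exec spdk_tgt_ns ip link set lo up
ping -c 1 10.0.0.2                            # host -> namespaced target reachability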
00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:41.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:41.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:39:41.980 00:39:41.980 --- 10.0.0.2 ping statistics --- 00:39:41.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.980 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:41.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:41.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:39:41.980 00:39:41.980 --- 10.0.0.1 ping statistics --- 00:39:41.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:41.980 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2595748 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 2595748 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2595748 ']' 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:41.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:41.980 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:41.980 [2024-12-06 17:06:30.528753] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:41.980 [2024-12-06 17:06:30.529738] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:39:41.981 [2024-12-06 17:06:30.529776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:41.981 [2024-12-06 17:06:30.612890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:41.981 [2024-12-06 17:06:30.632525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:41.981 [2024-12-06 17:06:30.632561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:41.981 [2024-12-06 17:06:30.632570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:41.981 [2024-12-06 17:06:30.632576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:41.981 [2024-12-06 17:06:30.632582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:41.981 [2024-12-06 17:06:30.634053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:41.981 [2024-12-06 17:06:30.634208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:41.981 [2024-12-06 17:06:30.634251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:41.981 [2024-12-06 17:06:30.634253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:42.241 [2024-12-06 17:06:30.684098] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:42.241 [2024-12-06 17:06:30.684392] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:42.241 [2024-12-06 17:06:30.685409] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
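waitforlisten above blocks until the newly launched nvmf_tgt answers JSON-RPC, and the EAL banner plus the reactor and interrupt-mode notices around this point are the app honoring -m 0xF and --interrupt-mode. A rough standalone equivalent of that launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket and the build path printed in this log:
# Sketch: start the target inside the namespace, then poll the RPC socket,
# approximating what nvmfappstart/waitforlisten do in the autotest helpers.
sudo ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2    # retry until the JSON-RPC server is listening
done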
00:39:42.241 [2024-12-06 17:06:30.685482] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:42.242 [2024-12-06 17:06:30.685655] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 [2024-12-06 17:06:30.731164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 Malloc0 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
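The rpc_cmd calls traced above provision the target end to end: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and a listener on 10.0.0.2:4420. Collapsed into direct rpc.py invocations (a sketch; rpc_cmd is a thin wrapper over the same socket, and the flags are copied verbatim from the trace):
# The provisioning sequence from the trace above, without the xtrace wrappers.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, flags as traced
rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B block size
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420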
00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 [2024-12-06 17:06:30.791000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:42.242 test case1: single bdev can't be used in multiple subsystems 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 [2024-12-06 17:06:30.814778] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:42.242 [2024-12-06 17:06:30.814798] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:42.242 [2024-12-06 17:06:30.814806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:42.242 request: 00:39:42.242 { 00:39:42.242 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:42.242 "namespace": { 00:39:42.242 "bdev_name": "Malloc0", 00:39:42.242 "no_auto_visible": false, 00:39:42.242 "hide_metadata": false 00:39:42.242 }, 00:39:42.242 "method": "nvmf_subsystem_add_ns", 00:39:42.242 "req_id": 1 00:39:42.242 } 00:39:42.242 Got JSON-RPC error response 00:39:42.242 response: 00:39:42.242 { 00:39:42.242 "code": -32602, 00:39:42.242 "message": "Invalid parameters" 00:39:42.242 } 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:42.242 17:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:42.242 Adding namespace failed - expected result. 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:42.242 test case2: host connect to nvmf target in multiple paths 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:42.242 [2024-12-06 17:06:30.822890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:42.242 17:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:42.502 17:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:43.072 17:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:43.072 17:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:43.072 17:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:43.072 17:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:43.072 17:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:44.983 17:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:44.983 17:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:44.983 17:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:44.983 17:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:44.983 17:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:44.983 17:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:44.983 17:06:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:44.983 [global] 00:39:44.983 thread=1 00:39:44.983 invalidate=1 
00:39:44.983 rw=write 00:39:44.983 time_based=1 00:39:44.983 runtime=1 00:39:44.983 ioengine=libaio 00:39:44.983 direct=1 00:39:44.983 bs=4096 00:39:44.983 iodepth=1 00:39:44.983 norandommap=0 00:39:44.983 numjobs=1 00:39:44.983 00:39:44.983 verify_dump=1 00:39:44.983 verify_backlog=512 00:39:44.983 verify_state_save=0 00:39:44.983 do_verify=1 00:39:44.983 verify=crc32c-intel 00:39:44.983 [job0] 00:39:44.983 filename=/dev/nvme0n1 00:39:44.983 Could not set queue depth (nvme0n1) 00:39:45.242 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:45.242 fio-3.35 00:39:45.242 Starting 1 thread 00:39:46.621 00:39:46.621 job0: (groupid=0, jobs=1): err= 0: pid=2596619: Fri Dec 6 17:06:35 2024 00:39:46.621 read: IOPS=19, BW=79.2KiB/s (81.1kB/s)(80.0KiB/1010msec) 00:39:46.621 slat (nsec): min=2754, max=33082, avg=26619.50, stdev=6762.55 00:39:46.621 clat (usec): min=520, max=43068, avg=39965.73, stdev=9291.36 00:39:46.621 lat (usec): min=533, max=43101, avg=39992.35, stdev=9294.77 00:39:46.621 clat percentiles (usec): 00:39:46.621 | 1.00th=[ 523], 5.00th=[ 523], 10.00th=[41681], 20.00th=[41681], 00:39:46.621 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:39:46.621 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[43254], 00:39:46.621 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:39:46.621 | 99.99th=[43254] 00:39:46.621 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:39:46.621 slat (nsec): min=3411, max=46459, avg=14698.10, stdev=8859.66 00:39:46.621 clat (usec): min=150, max=741, avg=391.60, stdev=118.58 00:39:46.621 lat (usec): min=154, max=784, avg=406.30, stdev=122.86 00:39:46.621 clat percentiles (usec): 00:39:46.621 | 1.00th=[ 172], 5.00th=[ 204], 10.00th=[ 233], 20.00th=[ 285], 00:39:46.621 | 30.00th=[ 330], 40.00th=[ 367], 50.00th=[ 383], 60.00th=[ 408], 00:39:46.621 | 70.00th=[ 449], 80.00th=[ 494], 90.00th=[ 553], 95.00th=[ 603], 00:39:46.621 | 99.00th=[ 676], 99.50th=[ 709], 99.90th=[ 742], 99.95th=[ 742], 00:39:46.621 | 99.99th=[ 742] 00:39:46.621 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:46.621 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:46.621 lat (usec) : 250=13.53%, 500=64.66%, 750=18.23% 00:39:46.621 lat (msec) : 50=3.57% 00:39:46.621 cpu : usr=0.59%, sys=0.89%, ctx=534, majf=0, minf=1 00:39:46.621 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:46.621 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.621 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.621 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.621 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:46.621 00:39:46.621 Run status group 0 (all jobs): 00:39:46.621 READ: bw=79.2KiB/s (81.1kB/s), 79.2KiB/s-79.2KiB/s (81.1kB/s-81.1kB/s), io=80.0KiB (81.9kB), run=1010-1010msec 00:39:46.622 WRITE: bw=2028KiB/s (2076kB/s), 2028KiB/s-2028KiB/s (2076kB/s-2076kB/s), io=2048KiB (2097kB), run=1010-1010msec 00:39:46.622 00:39:46.622 Disk stats (read/write): 00:39:46.622 nvme0n1: ios=59/512, merge=0/0, ticks=998/164, in_queue=1162, util=96.89% 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:46.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:46.622 17:06:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:46.622 rmmod nvme_tcp 00:39:46.622 rmmod nvme_fabrics 00:39:46.622 rmmod nvme_keyring 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2595748 ']' 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2595748 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2595748 ']' 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2595748 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:46.622 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595748 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2595748' 00:39:46.881 killing process with pid 2595748 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2595748 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2595748 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:46.881 17:06:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:49.415 00:39:49.415 real 0m12.505s 00:39:49.415 user 0m30.184s 00:39:49.415 sys 0m5.502s 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:49.415 ************************************ 00:39:49.415 END TEST nvmf_nmic 00:39:49.415 ************************************ 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:49.415 ************************************ 00:39:49.415 START TEST nvmf_fio_target 00:39:49.415 ************************************ 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:49.415 * Looking for test storage... 
00:39:49.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:49.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.415 --rc genhtml_branch_coverage=1 00:39:49.415 --rc genhtml_function_coverage=1 00:39:49.415 --rc genhtml_legend=1 00:39:49.415 --rc geninfo_all_blocks=1 00:39:49.415 --rc geninfo_unexecuted_blocks=1 00:39:49.415 00:39:49.415 ' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:49.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.415 --rc genhtml_branch_coverage=1 00:39:49.415 --rc genhtml_function_coverage=1 00:39:49.415 --rc genhtml_legend=1 00:39:49.415 --rc geninfo_all_blocks=1 00:39:49.415 --rc geninfo_unexecuted_blocks=1 00:39:49.415 00:39:49.415 ' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:49.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.415 --rc genhtml_branch_coverage=1 00:39:49.415 --rc genhtml_function_coverage=1 00:39:49.415 --rc genhtml_legend=1 00:39:49.415 --rc geninfo_all_blocks=1 00:39:49.415 --rc geninfo_unexecuted_blocks=1 00:39:49.415 00:39:49.415 ' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:49.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.415 --rc genhtml_branch_coverage=1 00:39:49.415 --rc genhtml_function_coverage=1 00:39:49.415 --rc genhtml_legend=1 00:39:49.415 --rc geninfo_all_blocks=1 00:39:49.415 --rc geninfo_unexecuted_blocks=1 00:39:49.415 
00:39:49.415 ' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:49.415 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:49.416 17:06:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:54.697 17:06:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:54.697 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:54.698 17:06:42 
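The per-device loop in the trace below maps each matched PCI function to its kernel net interface by globbing sysfs, as nvmf/common.sh does at @411 and @427. A standalone sketch of that mechanism, with the addresses discovered in this run:

    # Resolve each E810 PCI function to its net device via sysfs.
    for pci in 0000:31:00.0 0000:31:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path -> cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done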
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:39:54.698 Found 0000:31:00.0 (0x8086 - 0x159b) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:39:54.698 Found 0000:31:00.1 (0x8086 - 0x159b) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:39:54.698 Found net 
devices under 0000:31:00.0: cvl_0_0 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:39:54.698 Found net devices under 0000:31:00.1: cvl_0_1 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:54.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:54.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:39:54.698 00:39:54.698 --- 10.0.0.2 ping statistics --- 00:39:54.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.698 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:54.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:54.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:39:54.698 00:39:54.698 --- 10.0.0.1 ping statistics --- 00:39:54.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:54.698 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:54.698 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2601288 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2601288 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2601288 ']' 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:54.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
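nvmfappstart launches the target inside the freshly built namespace and blocks until its RPC socket answers, which is what the "Waiting for process to start up..." message reflects; the launch command appears in the trace below. A minimal sketch of that start-and-wait loop (polling rpc_get_methods is an assumption here; the harness's waitforlisten may check readiness differently):

    ns=cvl_0_0_ns_spdk
    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Start the target in interrupt mode inside the target namespace.
    ip netns exec "$ns" "$tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    pid=$!

    # Block until the app answers on /var/tmp/spdk.sock; bail out if it died.
    until "$rpc" rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
    done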
00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:54.699 17:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:54.699 [2024-12-06 17:06:43.018829] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:54.699 [2024-12-06 17:06:43.019998] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:39:54.699 [2024-12-06 17:06:43.020052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:54.699 [2024-12-06 17:06:43.111903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:54.699 [2024-12-06 17:06:43.135063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:54.699 [2024-12-06 17:06:43.135108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:54.699 [2024-12-06 17:06:43.135116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:54.699 [2024-12-06 17:06:43.135122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:54.699 [2024-12-06 17:06:43.135128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:54.699 [2024-12-06 17:06:43.136717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:54.699 [2024-12-06 17:06:43.136875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:54.699 [2024-12-06 17:06:43.137034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.699 [2024-12-06 17:06:43.137034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:54.699 [2024-12-06 17:06:43.188313] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:54.699 [2024-12-06 17:06:43.188980] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:54.699 [2024-12-06 17:06:43.189447] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:54.699 [2024-12-06 17:06:43.189688] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:54.699 [2024-12-06 17:06:43.189694] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
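With the app up, target/fio.sh provisions the transport, bdevs, and subsystem traced below: seven malloc bdevs, a raid0 over Malloc2/Malloc3, a concat over Malloc4-6, and four namespaces exported through one subsystem. Condensed into a single rpc.py sequence, using the exact commands and arguments from this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192B IO unit
    for i in 0 1 2 3 4 5 6; do
      $rpc bdev_malloc_create 64 512                    # 64 MiB, 512B blocks -> Malloc$i
    done
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
    for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns "$nqn" "$bdev"         # four namespaces -> nvme0n1..n4
    done
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420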
00:39:55.357 17:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:55.357 17:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:55.357 17:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:55.357 17:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:55.357 17:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:55.358 17:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:55.358 17:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:55.358 [2024-12-06 17:06:43.973828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:55.358 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:55.635 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:55.635 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:55.895 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:55.896 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:55.896 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:55.896 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:56.156 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:56.156 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:56.416 17:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:56.416 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:56.416 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:56.674 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:56.674 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:56.932 17:06:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:56.932 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:56.932 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:57.191 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:57.191 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:57.448 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:57.448 17:06:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:57.448 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.706 [2024-12-06 17:06:46.213652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.706 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:57.706 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:57.964 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:58.222 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:58.222 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:58.222 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:58.222 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:58.222 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:58.222 17:06:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:00.757 17:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:00.757 17:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:40:00.757 17:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:00.757 17:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:00.757 17:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:00.757 17:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:40:00.757 17:06:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:00.757 [global] 00:40:00.757 thread=1 00:40:00.757 invalidate=1 00:40:00.757 rw=write 00:40:00.757 time_based=1 00:40:00.757 runtime=1 00:40:00.757 ioengine=libaio 00:40:00.757 direct=1 00:40:00.757 bs=4096 00:40:00.757 iodepth=1 00:40:00.757 norandommap=0 00:40:00.757 numjobs=1 00:40:00.757 00:40:00.757 verify_dump=1 00:40:00.757 verify_backlog=512 00:40:00.757 verify_state_save=0 00:40:00.757 do_verify=1 00:40:00.757 verify=crc32c-intel 00:40:00.757 [job0] 00:40:00.757 filename=/dev/nvme0n1 00:40:00.757 [job1] 00:40:00.757 filename=/dev/nvme0n2 00:40:00.757 [job2] 00:40:00.757 filename=/dev/nvme0n3 00:40:00.757 [job3] 00:40:00.757 filename=/dev/nvme0n4 00:40:00.757 Could not set queue depth (nvme0n1) 00:40:00.757 Could not set queue depth (nvme0n2) 00:40:00.757 Could not set queue depth (nvme0n3) 00:40:00.757 Could not set queue depth (nvme0n4) 00:40:00.757 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:00.757 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:00.757 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:00.757 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:00.757 fio-3.35 00:40:00.757 Starting 4 threads 00:40:02.159 00:40:02.159 job0: (groupid=0, jobs=1): err= 0: pid=2602870: Fri Dec 6 17:06:50 2024 00:40:02.159 read: IOPS=563, BW=2254KiB/s (2308kB/s)(2256KiB/1001msec) 00:40:02.159 slat (nsec): min=2990, max=29900, avg=13643.61, stdev=4981.31 00:40:02.159 clat (usec): min=498, max=1233, avg=849.94, stdev=134.19 00:40:02.159 lat (usec): min=519, max=1249, avg=863.59, stdev=134.80 00:40:02.159 clat percentiles (usec): 00:40:02.159 | 1.00th=[ 553], 5.00th=[ 652], 10.00th=[ 693], 20.00th=[ 742], 00:40:02.159 | 30.00th=[ 775], 40.00th=[ 816], 50.00th=[ 840], 60.00th=[ 873], 00:40:02.159 | 70.00th=[ 906], 80.00th=[ 947], 90.00th=[ 1037], 95.00th=[ 1123], 00:40:02.159 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:40:02.159 | 99.99th=[ 1237] 00:40:02.159 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:40:02.159 slat (nsec): min=3321, max=43597, avg=11838.35, stdev=3304.84 00:40:02.159 clat (usec): min=167, max=1247, avg=484.15, stdev=149.98 00:40:02.159 lat (usec): min=181, max=1259, avg=495.99, stdev=150.22 00:40:02.159 clat percentiles (usec): 00:40:02.159 | 1.00th=[ 225], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 330], 00:40:02.159 | 30.00th=[ 396], 40.00th=[ 433], 50.00th=[ 478], 60.00th=[ 537], 00:40:02.159 | 70.00th=[ 570], 80.00th=[ 619], 90.00th=[ 693], 95.00th=[ 734], 00:40:02.159 | 99.00th=[ 
807], 99.50th=[ 832], 99.90th=[ 898], 99.95th=[ 1254], 00:40:02.159 | 99.99th=[ 1254] 00:40:02.159 bw ( KiB/s): min= 4096, max= 4096, per=37.30%, avg=4096.00, stdev= 0.00, samples=1 00:40:02.159 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:02.159 lat (usec) : 250=1.76%, 500=32.43%, 750=36.34%, 1000=24.50% 00:40:02.159 lat (msec) : 2=4.97% 00:40:02.159 cpu : usr=1.20%, sys=1.50%, ctx=1588, majf=0, minf=2 00:40:02.159 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.159 issued rwts: total=564,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.159 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.159 job1: (groupid=0, jobs=1): err= 0: pid=2602871: Fri Dec 6 17:06:50 2024 00:40:02.159 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:40:02.159 slat (nsec): min=3160, max=44539, avg=14866.53, stdev=5516.51 00:40:02.159 clat (usec): min=254, max=41502, avg=976.83, stdev=1806.90 00:40:02.159 lat (usec): min=258, max=41506, avg=991.70, stdev=1806.72 00:40:02.159 clat percentiles (usec): 00:40:02.159 | 1.00th=[ 416], 5.00th=[ 515], 10.00th=[ 578], 20.00th=[ 685], 00:40:02.159 | 30.00th=[ 799], 40.00th=[ 865], 50.00th=[ 955], 60.00th=[ 1012], 00:40:02.159 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1139], 95.00th=[ 1188], 00:40:02.159 | 99.00th=[ 1254], 99.50th=[ 1270], 99.90th=[41681], 99.95th=[41681], 00:40:02.159 | 99.99th=[41681] 00:40:02.159 write: IOPS=790, BW=3161KiB/s (3237kB/s)(3164KiB/1001msec); 0 zone resets 00:40:02.160 slat (nsec): min=4176, max=72988, avg=13395.20, stdev=4811.68 00:40:02.160 clat (usec): min=173, max=1071, avg=602.71, stdev=124.68 00:40:02.160 lat (usec): min=177, max=1085, avg=616.10, stdev=126.34 00:40:02.160 clat percentiles (usec): 00:40:02.160 | 1.00th=[ 289], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 498], 00:40:02.160 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 644], 00:40:02.160 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 783], 00:40:02.160 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 1074], 99.95th=[ 1074], 00:40:02.160 | 99.99th=[ 1074] 00:40:02.160 bw ( KiB/s): min= 4087, max= 4087, per=37.21%, avg=4087.00, stdev= 0.00, samples=1 00:40:02.160 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:40:02.160 lat (usec) : 250=0.15%, 500=13.89%, 750=49.58%, 1000=19.72% 00:40:02.160 lat (msec) : 2=16.58%, 50=0.08% 00:40:02.160 cpu : usr=0.70%, sys=1.80%, ctx=1307, majf=0, minf=1 00:40:02.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.160 issued rwts: total=512,791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.160 job2: (groupid=0, jobs=1): err= 0: pid=2602872: Fri Dec 6 17:06:50 2024 00:40:02.160 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1019msec) 00:40:02.160 slat (nsec): min=11108, max=27123, avg=24817.78, stdev=4947.05 00:40:02.160 clat (usec): min=1007, max=42098, avg=39423.78, stdev=9596.04 00:40:02.160 lat (usec): min=1018, max=42125, avg=39448.60, stdev=9599.50 00:40:02.160 clat percentiles (usec): 00:40:02.160 | 1.00th=[ 1004], 5.00th=[ 1004], 10.00th=[41157], 20.00th=[41157], 
00:40:02.160 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:40:02.160 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:02.160 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:02.160 | 99.99th=[42206] 00:40:02.160 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:40:02.160 slat (nsec): min=4213, max=41364, avg=13221.47, stdev=4074.72 00:40:02.160 clat (usec): min=281, max=946, avg=585.62, stdev=125.19 00:40:02.160 lat (usec): min=293, max=951, avg=598.84, stdev=126.13 00:40:02.160 clat percentiles (usec): 00:40:02.160 | 1.00th=[ 310], 5.00th=[ 383], 10.00th=[ 420], 20.00th=[ 478], 00:40:02.160 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 611], 00:40:02.160 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 791], 00:40:02.160 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 947], 99.95th=[ 947], 00:40:02.160 | 99.99th=[ 947] 00:40:02.160 bw ( KiB/s): min= 4096, max= 4096, per=37.30%, avg=4096.00, stdev= 0.00, samples=1 00:40:02.160 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:02.160 lat (usec) : 500=23.77%, 750=63.02%, 1000=9.81% 00:40:02.160 lat (msec) : 2=0.19%, 50=3.21% 00:40:02.160 cpu : usr=0.39%, sys=0.49%, ctx=531, majf=0, minf=1 00:40:02.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.160 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.160 job3: (groupid=0, jobs=1): err= 0: pid=2602873: Fri Dec 6 17:06:50 2024 00:40:02.160 read: IOPS=288, BW=1153KiB/s (1180kB/s)(1192KiB/1034msec) 00:40:02.160 slat (nsec): min=11122, max=59590, avg=17826.73, stdev=6457.32 00:40:02.160 clat (usec): min=764, max=41996, avg=2408.23, stdev=7338.30 00:40:02.160 lat (usec): min=779, max=42023, avg=2426.06, stdev=7339.05 00:40:02.160 clat percentiles (usec): 00:40:02.160 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 971], 00:40:02.160 | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:40:02.160 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1188], 95.00th=[ 1254], 00:40:02.160 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:02.160 | 99.99th=[42206] 00:40:02.160 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:40:02.160 slat (nsec): min=4279, max=41422, avg=13523.76, stdev=5307.84 00:40:02.160 clat (usec): min=211, max=1036, avg=585.72, stdev=146.47 00:40:02.160 lat (usec): min=219, max=1054, avg=599.24, stdev=148.42 00:40:02.160 clat percentiles (usec): 00:40:02.160 | 1.00th=[ 269], 5.00th=[ 322], 10.00th=[ 388], 20.00th=[ 461], 00:40:02.160 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:40:02.160 | 70.00th=[ 668], 80.00th=[ 717], 90.00th=[ 775], 95.00th=[ 799], 00:40:02.160 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 1037], 99.95th=[ 1037], 00:40:02.160 | 99.99th=[ 1037] 00:40:02.160 bw ( KiB/s): min= 4096, max= 4096, per=37.30%, avg=4096.00, stdev= 0.00, samples=1 00:40:02.160 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:02.160 lat (usec) : 250=0.25%, 500=16.54%, 750=37.65%, 1000=18.52% 00:40:02.160 lat (msec) : 2=25.80%, 50=1.23% 00:40:02.160 cpu : usr=0.48%, sys=1.16%, ctx=812, majf=0, minf=1 00:40:02.160 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:02.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:02.160 issued rwts: total=298,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:02.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:02.160 00:40:02.160 Run status group 0 (all jobs): 00:40:02.160 READ: bw=5385KiB/s (5514kB/s), 70.7KiB/s-2254KiB/s (72.4kB/s-2308kB/s), io=5568KiB (5702kB), run=1001-1034msec 00:40:02.160 WRITE: bw=10.7MiB/s (11.2MB/s), 1981KiB/s-4092KiB/s (2028kB/s-4190kB/s), io=11.1MiB (11.6MB), run=1001-1034msec 00:40:02.160 00:40:02.160 Disk stats (read/write): 00:40:02.160 nvme0n1: ios=562/842, merge=0/0, ticks=486/372, in_queue=858, util=87.07% 00:40:02.160 nvme0n2: ios=560/512, merge=0/0, ticks=1374/305, in_queue=1679, util=87.84% 00:40:02.160 nvme0n3: ios=36/512, merge=0/0, ticks=1379/300, in_queue=1679, util=91.96% 00:40:02.160 nvme0n4: ios=347/512, merge=0/0, ticks=683/292, in_queue=975, util=95.82% 00:40:02.160 17:06:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:02.160 [global] 00:40:02.160 thread=1 00:40:02.160 invalidate=1 00:40:02.160 rw=randwrite 00:40:02.160 time_based=1 00:40:02.160 runtime=1 00:40:02.160 ioengine=libaio 00:40:02.160 direct=1 00:40:02.160 bs=4096 00:40:02.160 iodepth=1 00:40:02.160 norandommap=0 00:40:02.160 numjobs=1 00:40:02.160 00:40:02.160 verify_dump=1 00:40:02.160 verify_backlog=512 00:40:02.160 verify_state_save=0 00:40:02.160 do_verify=1 00:40:02.160 verify=crc32c-intel 00:40:02.160 [job0] 00:40:02.160 filename=/dev/nvme0n1 00:40:02.160 [job1] 00:40:02.160 filename=/dev/nvme0n2 00:40:02.160 [job2] 00:40:02.160 filename=/dev/nvme0n3 00:40:02.160 [job3] 00:40:02.160 filename=/dev/nvme0n4 00:40:02.160 Could not set queue depth (nvme0n1) 00:40:02.160 Could not set queue depth (nvme0n2) 00:40:02.160 Could not set queue depth (nvme0n3) 00:40:02.160 Could not set queue depth (nvme0n4) 00:40:02.421 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.421 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.421 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.421 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:02.421 fio-3.35 00:40:02.421 Starting 4 threads 00:40:03.802 00:40:03.802 job0: (groupid=0, jobs=1): err= 0: pid=2603391: Fri Dec 6 17:06:52 2024 00:40:03.802 read: IOPS=425, BW=1700KiB/s (1741kB/s)(1748KiB/1028msec) 00:40:03.802 slat (nsec): min=3302, max=46711, avg=18826.12, stdev=3920.62 00:40:03.802 clat (usec): min=842, max=41801, avg=1626.44, stdev=4707.62 00:40:03.802 lat (usec): min=857, max=41821, avg=1645.26, stdev=4708.07 00:40:03.802 clat percentiles (usec): 00:40:03.802 | 1.00th=[ 881], 5.00th=[ 906], 10.00th=[ 938], 20.00th=[ 979], 00:40:03.802 | 30.00th=[ 1012], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1106], 00:40:03.802 | 70.00th=[ 1123], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1270], 00:40:03.802 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:03.802 | 99.99th=[41681] 00:40:03.802 write: IOPS=498, BW=1992KiB/s 
(2040kB/s)(2048KiB/1028msec); 0 zone resets 00:40:03.802 slat (nsec): min=3493, max=88756, avg=12742.84, stdev=5659.92 00:40:03.802 clat (usec): min=194, max=949, avg=581.65, stdev=120.49 00:40:03.802 lat (usec): min=198, max=963, avg=594.39, stdev=122.34 00:40:03.802 clat percentiles (usec): 00:40:03.802 | 1.00th=[ 297], 5.00th=[ 379], 10.00th=[ 429], 20.00th=[ 478], 00:40:03.802 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:40:03.802 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 775], 00:40:03.802 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 947], 00:40:03.802 | 99.99th=[ 947] 00:40:03.802 bw ( KiB/s): min= 4096, max= 4096, per=39.27%, avg=4096.00, stdev= 0.00, samples=1 00:40:03.802 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:03.802 lat (usec) : 250=0.21%, 500=12.33%, 750=37.62%, 1000=16.65% 00:40:03.802 lat (msec) : 2=32.56%, 50=0.63% 00:40:03.802 cpu : usr=0.88%, sys=2.43%, ctx=950, majf=0, minf=1 00:40:03.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.802 issued rwts: total=437,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:03.802 job1: (groupid=0, jobs=1): err= 0: pid=2603395: Fri Dec 6 17:06:52 2024 00:40:03.802 read: IOPS=242, BW=972KiB/s (995kB/s)(1000KiB/1029msec) 00:40:03.802 slat (nsec): min=3590, max=46675, avg=17560.46, stdev=5155.49 00:40:03.802 clat (usec): min=775, max=42066, avg=2940.38, stdev=8718.64 00:40:03.802 lat (usec): min=790, max=42088, avg=2957.94, stdev=8719.57 00:40:03.802 clat percentiles (usec): 00:40:03.802 | 1.00th=[ 807], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 930], 00:40:03.802 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 1004], 00:40:03.802 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1237], 00:40:03.802 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:03.802 | 99.99th=[42206] 00:40:03.802 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:40:03.802 slat (nsec): min=3391, max=37533, avg=11651.52, stdev=5478.39 00:40:03.802 clat (usec): min=147, max=995, avg=547.74, stdev=142.07 00:40:03.802 lat (usec): min=161, max=1009, avg=559.39, stdev=144.85 00:40:03.802 clat percentiles (usec): 00:40:03.802 | 1.00th=[ 243], 5.00th=[ 318], 10.00th=[ 359], 20.00th=[ 420], 00:40:03.802 | 30.00th=[ 461], 40.00th=[ 510], 50.00th=[ 553], 60.00th=[ 594], 00:40:03.802 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 725], 95.00th=[ 775], 00:40:03.802 | 99.00th=[ 848], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 996], 00:40:03.802 | 99.99th=[ 996] 00:40:03.802 bw ( KiB/s): min= 4096, max= 4096, per=39.27%, avg=4096.00, stdev= 0.00, samples=1 00:40:03.802 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:03.802 lat (usec) : 250=1.05%, 500=24.67%, 750=36.61%, 1000=23.23% 00:40:03.802 lat (msec) : 2=12.86%, 50=1.57% 00:40:03.802 cpu : usr=0.00%, sys=2.33%, ctx=765, majf=0, minf=1 00:40:03.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.802 issued rwts: total=250,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.802 
latency : target=0, window=0, percentile=100.00%, depth=1 00:40:03.802 job2: (groupid=0, jobs=1): err= 0: pid=2603396: Fri Dec 6 17:06:52 2024 00:40:03.802 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:40:03.802 slat (nsec): min=3609, max=38396, avg=17350.31, stdev=3634.23 00:40:03.802 clat (usec): min=587, max=1438, avg=1177.50, stdev=132.70 00:40:03.802 lat (usec): min=603, max=1455, avg=1194.85, stdev=133.50 00:40:03.802 clat percentiles (usec): 00:40:03.802 | 1.00th=[ 775], 5.00th=[ 922], 10.00th=[ 988], 20.00th=[ 1074], 00:40:03.802 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1205], 60.00th=[ 1237], 00:40:03.802 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1319], 95.00th=[ 1369], 00:40:03.802 | 99.00th=[ 1434], 99.50th=[ 1434], 99.90th=[ 1434], 99.95th=[ 1434], 00:40:03.802 | 99.99th=[ 1434] 00:40:03.802 write: IOPS=634, BW=2537KiB/s (2598kB/s)(2540KiB/1001msec); 0 zone resets 00:40:03.802 slat (nsec): min=4021, max=42916, avg=12069.98, stdev=3867.16 00:40:03.802 clat (usec): min=99, max=979, avg=592.64, stdev=147.59 00:40:03.802 lat (usec): min=112, max=1012, avg=604.71, stdev=148.73 00:40:03.802 clat percentiles (usec): 00:40:03.802 | 1.00th=[ 233], 5.00th=[ 330], 10.00th=[ 408], 20.00th=[ 469], 00:40:03.802 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:40:03.802 | 70.00th=[ 668], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 824], 00:40:03.802 | 99.00th=[ 873], 99.50th=[ 971], 99.90th=[ 979], 99.95th=[ 979], 00:40:03.803 | 99.99th=[ 979] 00:40:03.803 bw ( KiB/s): min= 4096, max= 4096, per=39.27%, avg=4096.00, stdev= 0.00, samples=1 00:40:03.803 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:03.803 lat (usec) : 100=0.09%, 250=0.52%, 500=14.04%, 750=32.87%, 1000=12.38% 00:40:03.803 lat (msec) : 2=40.10% 00:40:03.803 cpu : usr=0.70%, sys=1.60%, ctx=1148, majf=0, minf=2 00:40:03.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.803 issued rwts: total=512,635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:03.803 job3: (groupid=0, jobs=1): err= 0: pid=2603397: Fri Dec 6 17:06:52 2024 00:40:03.803 read: IOPS=612, BW=2450KiB/s (2508kB/s)(2452KiB/1001msec) 00:40:03.803 slat (nsec): min=3251, max=22281, avg=11544.85, stdev=2467.34 00:40:03.803 clat (usec): min=231, max=1103, avg=765.74, stdev=131.89 00:40:03.803 lat (usec): min=243, max=1114, avg=777.28, stdev=132.18 00:40:03.803 clat percentiles (usec): 00:40:03.803 | 1.00th=[ 412], 5.00th=[ 537], 10.00th=[ 586], 20.00th=[ 652], 00:40:03.803 | 30.00th=[ 701], 40.00th=[ 742], 50.00th=[ 783], 60.00th=[ 824], 00:40:03.803 | 70.00th=[ 848], 80.00th=[ 881], 90.00th=[ 922], 95.00th=[ 955], 00:40:03.803 | 99.00th=[ 1012], 99.50th=[ 1037], 99.90th=[ 1106], 99.95th=[ 1106], 00:40:03.803 | 99.99th=[ 1106] 00:40:03.803 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:40:03.803 slat (nsec): min=4146, max=40174, avg=13799.61, stdev=3162.50 00:40:03.803 clat (usec): min=106, max=854, avg=492.14, stdev=126.97 00:40:03.803 lat (usec): min=111, max=868, avg=505.94, stdev=127.65 00:40:03.803 clat percentiles (usec): 00:40:03.803 | 1.00th=[ 202], 5.00th=[ 277], 10.00th=[ 330], 20.00th=[ 375], 00:40:03.803 | 30.00th=[ 433], 40.00th=[ 469], 50.00th=[ 498], 60.00th=[ 523], 00:40:03.803 | 70.00th=[ 562], 
80.00th=[ 603], 90.00th=[ 652], 95.00th=[ 693], 00:40:03.803 | 99.00th=[ 766], 99.50th=[ 783], 99.90th=[ 832], 99.95th=[ 857], 00:40:03.803 | 99.99th=[ 857] 00:40:03.803 bw ( KiB/s): min= 4096, max= 4096, per=39.27%, avg=4096.00, stdev= 0.00, samples=1 00:40:03.803 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:03.803 lat (usec) : 250=2.57%, 500=30.48%, 750=44.47%, 1000=21.93% 00:40:03.803 lat (msec) : 2=0.55% 00:40:03.803 cpu : usr=1.20%, sys=1.60%, ctx=1639, majf=0, minf=1 00:40:03.803 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:03.803 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.803 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:03.803 issued rwts: total=613,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:03.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:03.803 00:40:03.803 Run status group 0 (all jobs): 00:40:03.803 READ: bw=7044KiB/s (7213kB/s), 972KiB/s-2450KiB/s (995kB/s-2508kB/s), io=7248KiB (7422kB), run=1001-1029msec 00:40:03.803 WRITE: bw=10.2MiB/s (10.7MB/s), 1990KiB/s-4092KiB/s (2038kB/s-4190kB/s), io=10.5MiB (11.0MB), run=1001-1029msec 00:40:03.803 00:40:03.803 Disk stats (read/write): 00:40:03.803 nvme0n1: ios=336/512, merge=0/0, ticks=1444/232, in_queue=1676, util=96.09% 00:40:03.803 nvme0n2: ios=257/512, merge=0/0, ticks=1589/233, in_queue=1822, util=100.00% 00:40:03.803 nvme0n3: ios=477/512, merge=0/0, ticks=802/294, in_queue=1096, util=95.47% 00:40:03.803 nvme0n4: ios=569/860, merge=0/0, ticks=992/398, in_queue=1390, util=96.91% 00:40:03.803 17:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:03.803 [global] 00:40:03.803 thread=1 00:40:03.803 invalidate=1 00:40:03.803 rw=write 00:40:03.803 time_based=1 00:40:03.803 runtime=1 00:40:03.803 ioengine=libaio 00:40:03.803 direct=1 00:40:03.803 bs=4096 00:40:03.803 iodepth=128 00:40:03.803 norandommap=0 00:40:03.803 numjobs=1 00:40:03.803 00:40:03.803 verify_dump=1 00:40:03.803 verify_backlog=512 00:40:03.803 verify_state_save=0 00:40:03.803 do_verify=1 00:40:03.803 verify=crc32c-intel 00:40:03.803 [job0] 00:40:03.803 filename=/dev/nvme0n1 00:40:03.803 [job1] 00:40:03.803 filename=/dev/nvme0n2 00:40:03.803 [job2] 00:40:03.803 filename=/dev/nvme0n3 00:40:03.803 [job3] 00:40:03.803 filename=/dev/nvme0n4 00:40:03.803 Could not set queue depth (nvme0n1) 00:40:03.803 Could not set queue depth (nvme0n2) 00:40:03.803 Could not set queue depth (nvme0n3) 00:40:03.803 Could not set queue depth (nvme0n4) 00:40:03.803 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:03.803 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:03.803 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:03.803 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:03.803 fio-3.35 00:40:03.803 Starting 4 threads 00:40:05.185 00:40:05.185 job0: (groupid=0, jobs=1): err= 0: pid=2603916: Fri Dec 6 17:06:53 2024 00:40:05.185 read: IOPS=5413, BW=21.1MiB/s (22.2MB/s)(21.2MiB/1004msec) 00:40:05.185 slat (nsec): min=974, max=14796k, avg=90433.27, stdev=692743.17 00:40:05.185 clat (usec): min=2319, max=48022, avg=12043.67, stdev=7601.37 
00:40:05.185 lat (usec): min=2329, max=52442, avg=12134.11, stdev=7664.67 00:40:05.185 clat percentiles (usec): 00:40:05.185 | 1.00th=[ 4080], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6456], 00:40:05.185 | 30.00th=[ 6980], 40.00th=[ 7898], 50.00th=[ 8717], 60.00th=[10028], 00:40:05.185 | 70.00th=[13304], 80.00th=[18220], 90.00th=[24249], 95.00th=[27395], 00:40:05.185 | 99.00th=[35390], 99.50th=[40109], 99.90th=[47973], 99.95th=[47973], 00:40:05.185 | 99.99th=[47973] 00:40:05.185 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:40:05.185 slat (nsec): min=1629, max=15922k, avg=84175.85, stdev=624740.67 00:40:05.185 clat (usec): min=884, max=52717, avg=10955.24, stdev=8526.90 00:40:05.185 lat (usec): min=893, max=52721, avg=11039.42, stdev=8595.33 00:40:05.185 clat percentiles (usec): 00:40:05.185 | 1.00th=[ 2311], 5.00th=[ 3687], 10.00th=[ 4146], 20.00th=[ 5407], 00:40:05.185 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 8717], 00:40:05.185 | 70.00th=[12125], 80.00th=[17433], 90.00th=[21627], 95.00th=[28705], 00:40:05.185 | 99.00th=[43254], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:40:05.185 | 99.99th=[52691] 00:40:05.185 bw ( KiB/s): min=12640, max=32416, per=25.18%, avg=22528.00, stdev=13983.74, samples=2 00:40:05.186 iops : min= 3160, max= 8104, avg=5632.00, stdev=3495.94, samples=2 00:40:05.186 lat (usec) : 1000=0.15% 00:40:05.186 lat (msec) : 2=0.28%, 4=4.01%, 10=56.39%, 20=23.80%, 50=15.13% 00:40:05.186 lat (msec) : 100=0.23% 00:40:05.186 cpu : usr=3.39%, sys=4.79%, ctx=458, majf=0, minf=1 00:40:05.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:40:05.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.186 issued rwts: total=5435,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.186 job1: (groupid=0, jobs=1): err= 0: pid=2603917: Fri Dec 6 17:06:53 2024 00:40:05.186 read: IOPS=6123, BW=23.9MiB/s (25.1MB/s)(24.1MiB/1007msec) 00:40:05.186 slat (nsec): min=904, max=18567k, avg=75392.02, stdev=703571.62 00:40:05.186 clat (usec): min=2644, max=48615, avg=10462.21, stdev=7548.76 00:40:05.186 lat (usec): min=2648, max=48633, avg=10537.60, stdev=7593.82 00:40:05.186 clat percentiles (usec): 00:40:05.186 | 1.00th=[ 3851], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 6325], 00:40:05.186 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7963], 60.00th=[ 8717], 00:40:05.186 | 70.00th=[ 9896], 80.00th=[11338], 90.00th=[19006], 95.00th=[27919], 00:40:05.186 | 99.00th=[42206], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:40:05.186 | 99.99th=[48497] 00:40:05.186 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:40:05.186 slat (nsec): min=1559, max=13957k, avg=66771.00, stdev=487866.22 00:40:05.186 clat (usec): min=1305, max=48085, avg=9504.93, stdev=6987.13 00:40:05.186 lat (usec): min=1316, max=48097, avg=9571.70, stdev=7028.41 00:40:05.186 clat percentiles (usec): 00:40:05.186 | 1.00th=[ 2245], 5.00th=[ 3884], 10.00th=[ 4359], 20.00th=[ 5014], 00:40:05.186 | 30.00th=[ 5538], 40.00th=[ 6063], 50.00th=[ 6980], 60.00th=[ 8160], 00:40:05.186 | 70.00th=[10945], 80.00th=[12387], 90.00th=[17171], 95.00th=[24773], 00:40:05.186 | 99.00th=[40109], 99.50th=[40633], 99.90th=[47973], 99.95th=[47973], 00:40:05.186 | 99.99th=[47973] 00:40:05.186 bw ( KiB/s): min=24576, max=27832, per=29.28%, avg=26204.00, 
stdev=2302.34, samples=2 00:40:05.186 iops : min= 6144, max= 6958, avg=6551.00, stdev=575.58, samples=2 00:40:05.186 lat (msec) : 2=0.44%, 4=3.72%, 10=66.32%, 20=22.08%, 50=7.43% 00:40:05.186 cpu : usr=3.28%, sys=4.37%, ctx=437, majf=0, minf=2 00:40:05.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:40:05.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.186 issued rwts: total=6166,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.186 job2: (groupid=0, jobs=1): err= 0: pid=2603919: Fri Dec 6 17:06:53 2024 00:40:05.186 read: IOPS=6351, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1004msec) 00:40:05.186 slat (nsec): min=971, max=9281.7k, avg=74516.06, stdev=570364.07 00:40:05.186 clat (usec): min=2755, max=27755, avg=9582.08, stdev=3329.12 00:40:05.186 lat (usec): min=2757, max=27757, avg=9656.60, stdev=3373.97 00:40:05.186 clat percentiles (usec): 00:40:05.186 | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7242], 00:40:05.186 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[ 9241], 00:40:05.186 | 70.00th=[ 9896], 80.00th=[11338], 90.00th=[13829], 95.00th=[16057], 00:40:05.186 | 99.00th=[24511], 99.50th=[25822], 99.90th=[26870], 99.95th=[27657], 00:40:05.186 | 99.99th=[27657] 00:40:05.186 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:40:05.186 slat (nsec): min=1663, max=29082k, avg=72902.53, stdev=597684.35 00:40:05.186 clat (usec): min=1512, max=39044, avg=9334.16, stdev=4317.48 00:40:05.186 lat (usec): min=1523, max=39049, avg=9407.06, stdev=4358.88 00:40:05.186 clat percentiles (usec): 00:40:05.186 | 1.00th=[ 3261], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 6521], 00:40:05.186 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 9372], 00:40:05.186 | 70.00th=[10159], 80.00th=[11863], 90.00th=[12649], 95.00th=[18744], 00:40:05.186 | 99.00th=[23725], 99.50th=[24511], 99.90th=[39060], 99.95th=[39060], 00:40:05.186 | 99.99th=[39060] 00:40:05.186 bw ( KiB/s): min=26392, max=26856, per=29.75%, avg=26624.00, stdev=328.10, samples=2 00:40:05.186 iops : min= 6598, max= 6714, avg=6656.00, stdev=82.02, samples=2 00:40:05.186 lat (msec) : 2=0.12%, 4=1.53%, 10=68.83%, 20=26.24%, 50=3.28% 00:40:05.186 cpu : usr=3.69%, sys=4.29%, ctx=405, majf=0, minf=2 00:40:05.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:40:05.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.186 issued rwts: total=6377,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.186 job3: (groupid=0, jobs=1): err= 0: pid=2603920: Fri Dec 6 17:06:53 2024 00:40:05.186 read: IOPS=3282, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1004msec) 00:40:05.186 slat (nsec): min=914, max=42685k, avg=160445.32, stdev=1217340.21 00:40:05.186 clat (usec): min=1976, max=55593, avg=19751.08, stdev=12866.57 00:40:05.186 lat (usec): min=5100, max=55600, avg=19911.53, stdev=12907.18 00:40:05.186 clat percentiles (usec): 00:40:05.186 | 1.00th=[ 5276], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[ 8848], 00:40:05.186 | 30.00th=[10945], 40.00th=[12256], 50.00th=[13960], 60.00th=[17695], 00:40:05.186 | 70.00th=[23462], 80.00th=[32113], 90.00th=[40633], 95.00th=[48497], 00:40:05.186 | 
99.00th=[52167], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:40:05.186 | 99.99th=[55837] 00:40:05.186 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:40:05.186 slat (nsec): min=1550, max=10189k, avg=127289.95, stdev=586379.62 00:40:05.186 clat (usec): min=3473, max=56814, avg=17264.85, stdev=12758.03 00:40:05.186 lat (usec): min=3484, max=56822, avg=17392.14, stdev=12835.64 00:40:05.186 clat percentiles (usec): 00:40:05.186 | 1.00th=[ 3654], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6652], 00:40:05.186 | 30.00th=[ 7832], 40.00th=[11863], 50.00th=[12780], 60.00th=[15664], 00:40:05.186 | 70.00th=[17433], 80.00th=[27395], 90.00th=[39584], 95.00th=[47449], 00:40:05.186 | 99.00th=[52691], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:40:05.186 | 99.99th=[56886] 00:40:05.186 bw ( KiB/s): min=13528, max=15144, per=16.02%, avg=14336.00, stdev=1142.68, samples=2 00:40:05.186 iops : min= 3382, max= 3786, avg=3584.00, stdev=285.67, samples=2 00:40:05.186 lat (msec) : 2=0.01%, 4=0.77%, 10=29.72%, 20=40.03%, 50=26.50% 00:40:05.186 lat (msec) : 100=2.97% 00:40:05.186 cpu : usr=1.89%, sys=2.29%, ctx=534, majf=0, minf=1 00:40:05.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:40:05.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:05.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:05.186 issued rwts: total=3296,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:05.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:05.186 00:40:05.186 Run status group 0 (all jobs): 00:40:05.186 READ: bw=82.5MiB/s (86.5MB/s), 12.8MiB/s-24.8MiB/s (13.4MB/s-26.0MB/s), io=83.1MiB (87.1MB), run=1004-1007msec 00:40:05.186 WRITE: bw=87.4MiB/s (91.6MB/s), 13.9MiB/s-25.9MiB/s (14.6MB/s-27.2MB/s), io=88.0MiB (92.3MB), run=1004-1007msec 00:40:05.186 00:40:05.186 Disk stats (read/write): 00:40:05.186 nvme0n1: ios=4776/5120, merge=0/0, ticks=34629/33611, in_queue=68240, util=91.48% 00:40:05.186 nvme0n2: ios=5170/5198, merge=0/0, ticks=42124/37768, in_queue=79892, util=94.70% 00:40:05.186 nvme0n3: ios=5172/5632, merge=0/0, ticks=47314/49935, in_queue=97249, util=98.52% 00:40:05.186 nvme0n4: ios=2621/3072, merge=0/0, ticks=14304/13772, in_queue=28076, util=93.06% 00:40:05.186 17:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:05.186 [global] 00:40:05.186 thread=1 00:40:05.186 invalidate=1 00:40:05.186 rw=randwrite 00:40:05.186 time_based=1 00:40:05.186 runtime=1 00:40:05.186 ioengine=libaio 00:40:05.186 direct=1 00:40:05.186 bs=4096 00:40:05.186 iodepth=128 00:40:05.186 norandommap=0 00:40:05.186 numjobs=1 00:40:05.186 00:40:05.186 verify_dump=1 00:40:05.186 verify_backlog=512 00:40:05.186 verify_state_save=0 00:40:05.186 do_verify=1 00:40:05.186 verify=crc32c-intel 00:40:05.186 [job0] 00:40:05.186 filename=/dev/nvme0n1 00:40:05.186 [job1] 00:40:05.186 filename=/dev/nvme0n2 00:40:05.186 [job2] 00:40:05.186 filename=/dev/nvme0n3 00:40:05.186 [job3] 00:40:05.186 filename=/dev/nvme0n4 00:40:05.186 Could not set queue depth (nvme0n1) 00:40:05.186 Could not set queue depth (nvme0n2) 00:40:05.186 Could not set queue depth (nvme0n3) 00:40:05.186 Could not set queue depth (nvme0n4) 00:40:05.446 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.446 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.446 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.446 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:05.446 fio-3.35 00:40:05.446 Starting 4 threads 00:40:06.828 00:40:06.828 job0: (groupid=0, jobs=1): err= 0: pid=2604444: Fri Dec 6 17:06:55 2024 00:40:06.828 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:40:06.828 slat (nsec): min=942, max=16760k, avg=84054.09, stdev=672542.57 00:40:06.828 clat (usec): min=3465, max=32358, avg=10747.70, stdev=4282.04 00:40:06.828 lat (usec): min=3469, max=32363, avg=10831.75, stdev=4324.98 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 5473], 5.00th=[ 6915], 10.00th=[ 7439], 20.00th=[ 7898], 00:40:06.828 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[10159], 00:40:06.828 | 70.00th=[11076], 80.00th=[12780], 90.00th=[15139], 95.00th=[18744], 00:40:06.828 | 99.00th=[29492], 99.50th=[32113], 99.90th=[32375], 99.95th=[32375], 00:40:06.828 | 99.99th=[32375] 00:40:06.828 write: IOPS=6434, BW=25.1MiB/s (26.4MB/s)(25.3MiB/1008msec); 0 zone resets 00:40:06.828 slat (nsec): min=1543, max=8568.2k, avg=71067.82, stdev=478614.49 00:40:06.828 clat (usec): min=675, max=28522, avg=9539.85, stdev=3924.76 00:40:06.828 lat (usec): min=682, max=28524, avg=9610.92, stdev=3954.01 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 3064], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 6718], 00:40:06.828 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 8586], 60.00th=[ 9503], 00:40:06.828 | 70.00th=[10290], 80.00th=[12518], 90.00th=[14877], 95.00th=[16909], 00:40:06.828 | 99.00th=[22414], 99.50th=[24249], 99.90th=[28181], 99.95th=[28181], 00:40:06.828 | 99.99th=[28443] 00:40:06.828 bw ( KiB/s): min=23728, max=27144, per=24.43%, avg=25436.00, stdev=2415.48, samples=2 00:40:06.828 iops : min= 5932, max= 6786, avg=6359.00, stdev=603.87, samples=2 00:40:06.828 lat (usec) : 750=0.02% 00:40:06.828 lat (msec) : 2=0.07%, 4=1.01%, 10=61.62%, 20=34.24%, 50=3.06% 00:40:06.828 cpu : usr=3.18%, sys=3.38%, ctx=485, majf=0, minf=1 00:40:06.828 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:06.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:06.828 issued rwts: total=6144,6486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.828 job1: (groupid=0, jobs=1): err= 0: pid=2604445: Fri Dec 6 17:06:55 2024 00:40:06.828 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:40:06.828 slat (nsec): min=928, max=16721k, avg=105994.79, stdev=871921.88 00:40:06.828 clat (usec): min=2187, max=48671, avg=14672.76, stdev=9925.53 00:40:06.828 lat (usec): min=2195, max=53918, avg=14778.76, stdev=10004.56 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 3621], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6980], 00:40:06.828 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 9503], 60.00th=[12780], 00:40:06.828 | 70.00th=[20055], 80.00th=[23462], 90.00th=[30016], 95.00th=[36439], 00:40:06.828 | 99.00th=[40633], 99.50th=[44827], 99.90th=[47449], 99.95th=[48497], 00:40:06.828 | 99.99th=[48497] 00:40:06.828 write: IOPS=4610, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec); 0 zone resets 00:40:06.828 slat 
(nsec): min=1498, max=15768k, avg=93485.18, stdev=757431.89 00:40:06.828 clat (usec): min=461, max=57860, avg=12902.51, stdev=9157.93 00:40:06.828 lat (usec): min=466, max=57864, avg=12996.00, stdev=9239.63 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 1237], 5.00th=[ 2966], 10.00th=[ 4293], 20.00th=[ 5538], 00:40:06.828 | 30.00th=[ 6783], 40.00th=[ 7963], 50.00th=[ 9896], 60.00th=[13173], 00:40:06.828 | 70.00th=[16450], 80.00th=[20055], 90.00th=[24511], 95.00th=[26084], 00:40:06.828 | 99.00th=[52691], 99.50th=[54264], 99.90th=[57410], 99.95th=[57410], 00:40:06.828 | 99.99th=[57934] 00:40:06.828 bw ( KiB/s): min=16384, max=20480, per=17.70%, avg=18432.00, stdev=2896.31, samples=2 00:40:06.828 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:40:06.828 lat (usec) : 500=0.03%, 750=0.01%, 1000=0.05% 00:40:06.828 lat (msec) : 2=1.31%, 4=3.65%, 10=46.79%, 20=22.19%, 50=25.31% 00:40:06.828 lat (msec) : 100=0.64% 00:40:06.828 cpu : usr=2.50%, sys=2.79%, ctx=350, majf=0, minf=1 00:40:06.828 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:06.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:06.828 issued rwts: total=4608,4624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.828 job2: (groupid=0, jobs=1): err= 0: pid=2604446: Fri Dec 6 17:06:55 2024 00:40:06.828 read: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec) 00:40:06.828 slat (nsec): min=921, max=4772.5k, avg=72067.78, stdev=429705.82 00:40:06.828 clat (usec): min=5142, max=14921, avg=9086.12, stdev=1306.95 00:40:06.828 lat (usec): min=5144, max=14973, avg=9158.19, stdev=1345.31 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 5800], 5.00th=[ 6849], 10.00th=[ 7308], 20.00th=[ 8160], 00:40:06.828 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9372], 00:40:06.828 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10683], 95.00th=[11207], 00:40:06.828 | 99.00th=[12125], 99.50th=[12780], 99.90th=[13829], 99.95th=[14484], 00:40:06.828 | 99.99th=[14877] 00:40:06.828 write: IOPS=7079, BW=27.7MiB/s (29.0MB/s)(27.8MiB/1004msec); 0 zone resets 00:40:06.828 slat (nsec): min=1519, max=8462.6k, avg=70375.73, stdev=385504.33 00:40:06.828 clat (usec): min=837, max=66621, avg=9407.17, stdev=6678.91 00:40:06.828 lat (usec): min=907, max=66628, avg=9477.54, stdev=6716.49 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 2868], 5.00th=[ 5211], 10.00th=[ 6849], 20.00th=[ 7701], 00:40:06.828 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8356], 00:40:06.828 | 70.00th=[ 8717], 80.00th=[ 9241], 90.00th=[10552], 95.00th=[15664], 00:40:06.828 | 99.00th=[49021], 99.50th=[59507], 99.90th=[65799], 99.95th=[66847], 00:40:06.828 | 99.99th=[66847] 00:40:06.828 bw ( KiB/s): min=27712, max=28128, per=26.81%, avg=27920.00, stdev=294.16, samples=2 00:40:06.828 iops : min= 6928, max= 7032, avg=6980.00, stdev=73.54, samples=2 00:40:06.828 lat (usec) : 1000=0.03% 00:40:06.828 lat (msec) : 2=0.25%, 4=0.57%, 10=82.08%, 20=15.36%, 50=1.18% 00:40:06.828 lat (msec) : 100=0.52% 00:40:06.828 cpu : usr=1.99%, sys=3.39%, ctx=876, majf=0, minf=1 00:40:06.828 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:40:06.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:40:06.828 issued rwts: total=6656,7108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.828 job3: (groupid=0, jobs=1): err= 0: pid=2604447: Fri Dec 6 17:06:55 2024 00:40:06.828 read: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1007msec) 00:40:06.828 slat (nsec): min=951, max=8228.6k, avg=70070.23, stdev=575560.92 00:40:06.828 clat (usec): min=2531, max=17152, avg=8616.43, stdev=2151.70 00:40:06.828 lat (usec): min=2534, max=20760, avg=8686.50, stdev=2203.44 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 4490], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 7177], 00:40:06.828 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:40:06.828 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[12256], 95.00th=[13042], 00:40:06.828 | 99.00th=[14484], 99.50th=[14877], 99.90th=[17171], 99.95th=[17171], 00:40:06.828 | 99.99th=[17171] 00:40:06.828 write: IOPS=7966, BW=31.1MiB/s (32.6MB/s)(31.3MiB/1007msec); 0 zone resets 00:40:06.828 slat (nsec): min=1659, max=7397.7k, avg=54640.28, stdev=368700.56 00:40:06.828 clat (usec): min=1130, max=15819, avg=7656.61, stdev=1790.12 00:40:06.828 lat (usec): min=1141, max=15824, avg=7711.25, stdev=1802.19 00:40:06.828 clat percentiles (usec): 00:40:06.828 | 1.00th=[ 3032], 5.00th=[ 4752], 10.00th=[ 5342], 20.00th=[ 6325], 00:40:06.828 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 7963], 00:40:06.828 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9634], 95.00th=[11076], 00:40:06.828 | 99.00th=[12780], 99.50th=[14353], 99.90th=[14615], 99.95th=[14615], 00:40:06.828 | 99.99th=[15795] 00:40:06.828 bw ( KiB/s): min=30392, max=32768, per=30.33%, avg=31580.00, stdev=1680.09, samples=2 00:40:06.828 iops : min= 7598, max= 8192, avg=7895.00, stdev=420.02, samples=2 00:40:06.828 lat (msec) : 2=0.11%, 4=1.19%, 10=84.03%, 20=14.67% 00:40:06.828 cpu : usr=3.08%, sys=4.77%, ctx=694, majf=0, minf=1 00:40:06.828 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:40:06.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:06.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:06.828 issued rwts: total=7680,8022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:06.828 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:06.828 00:40:06.828 Run status group 0 (all jobs): 00:40:06.828 READ: bw=97.2MiB/s (102MB/s), 17.9MiB/s-29.8MiB/s (18.8MB/s-31.2MB/s), io=98.0MiB (103MB), run=1003-1008msec 00:40:06.828 WRITE: bw=102MiB/s (107MB/s), 18.0MiB/s-31.1MiB/s (18.9MB/s-32.6MB/s), io=103MiB (107MB), run=1003-1008msec 00:40:06.828 00:40:06.828 Disk stats (read/write): 00:40:06.828 nvme0n1: ios=5143/5452, merge=0/0, ticks=50293/47595, in_queue=97888, util=88.08% 00:40:06.828 nvme0n2: ios=3822/4096, merge=0/0, ticks=33794/31165, in_queue=64959, util=92.55% 00:40:06.828 nvme0n3: ios=5688/5759, merge=0/0, ticks=24807/32308, in_queue=57115, util=91.66% 00:40:06.828 nvme0n4: ios=6323/6656, merge=0/0, ticks=53324/49648, in_queue=102972, util=96.47% 00:40:06.828 17:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:06.828 17:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2604762 00:40:06.828 17:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:06.828 17:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:06.828 [global] 00:40:06.828 thread=1 00:40:06.828 invalidate=1 00:40:06.828 rw=read 00:40:06.828 time_based=1 00:40:06.828 runtime=10 00:40:06.828 ioengine=libaio 00:40:06.828 direct=1 00:40:06.828 bs=4096 00:40:06.828 iodepth=1 00:40:06.828 norandommap=1 00:40:06.828 numjobs=1 00:40:06.828 00:40:06.828 [job0] 00:40:06.829 filename=/dev/nvme0n1 00:40:06.829 [job1] 00:40:06.829 filename=/dev/nvme0n2 00:40:06.829 [job2] 00:40:06.829 filename=/dev/nvme0n3 00:40:06.829 [job3] 00:40:06.829 filename=/dev/nvme0n4 00:40:06.829 Could not set queue depth (nvme0n1) 00:40:06.829 Could not set queue depth (nvme0n2) 00:40:06.829 Could not set queue depth (nvme0n3) 00:40:06.829 Could not set queue depth (nvme0n4) 00:40:07.087 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.087 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.087 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.087 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:07.087 fio-3.35 00:40:07.087 Starting 4 threads 00:40:09.621 17:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:09.880 17:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:09.880 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2596864, buflen=4096 00:40:09.880 fio: pid=2604964, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.140 17:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.140 17:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:10.140 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=290816, buflen=4096 00:40:10.140 fio: pid=2604963, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.140 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=319488, buflen=4096 00:40:10.140 17:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.140 fio: pid=2604961, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.140 17:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:10.400 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=17539072, buflen=4096 00:40:10.400 fio: pid=2604962, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:10.400 17:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.400 17:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:10.400 00:40:10.400 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2604961: Fri Dec 6 17:06:58 2024 00:40:10.400 read: IOPS=26, BW=104KiB/s (106kB/s)(312KiB/3008msec) 00:40:10.400 slat (usec): min=6, max=28659, avg=387.70, stdev=3221.63 00:40:10.400 clat (usec): min=696, max=41525, avg=37889.78, stdev=10777.45 00:40:10.400 lat (usec): min=707, max=69934, avg=38282.29, stdev=11367.41 00:40:10.400 clat percentiles (usec): 00:40:10.400 | 1.00th=[ 693], 5.00th=[ 824], 10.00th=[40633], 20.00th=[41157], 00:40:10.400 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:10.400 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:10.400 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:10.400 | 99.99th=[41681] 00:40:10.400 bw ( KiB/s): min= 96, max= 104, per=1.54%, avg=99.20, stdev= 4.38, samples=5 00:40:10.400 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:40:10.400 lat (usec) : 750=3.80%, 1000=3.80% 00:40:10.400 lat (msec) : 50=91.14% 00:40:10.400 cpu : usr=0.00%, sys=0.10%, ctx=80, majf=0, minf=2 00:40:10.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.400 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.400 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.400 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2604962: Fri Dec 6 17:06:58 2024 00:40:10.400 read: IOPS=1354, BW=5419KiB/s (5549kB/s)(16.7MiB/3161msec) 00:40:10.400 slat (usec): min=3, max=22376, avg=37.03, stdev=440.90 00:40:10.400 clat (usec): min=307, max=1095, avg=693.53, stdev=66.11 00:40:10.400 lat (usec): min=334, max=23124, avg=730.56, stdev=448.75 00:40:10.400 clat percentiles (usec): 00:40:10.400 | 1.00th=[ 523], 5.00th=[ 578], 10.00th=[ 603], 20.00th=[ 635], 00:40:10.400 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 717], 00:40:10.400 | 70.00th=[ 734], 80.00th=[ 742], 90.00th=[ 766], 95.00th=[ 783], 00:40:10.400 | 99.00th=[ 824], 99.50th=[ 848], 99.90th=[ 873], 99.95th=[ 914], 00:40:10.400 | 99.99th=[ 1090] 00:40:10.400 bw ( KiB/s): min= 5011, max= 5656, per=85.11%, avg=5455.17, stdev=237.02, samples=6 00:40:10.400 iops : min= 1252, max= 1414, avg=1363.67, stdev=59.54, samples=6 00:40:10.400 lat (usec) : 500=0.68%, 750=82.12%, 1000=17.16% 00:40:10.400 lat (msec) : 2=0.02% 00:40:10.400 cpu : usr=1.55%, sys=3.29%, ctx=4289, majf=0, minf=1 00:40:10.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.400 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.400 issued rwts: total=4283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.400 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2604963: Fri Dec 6 17:06:58 2024 00:40:10.400 read: IOPS=25, BW=98.9KiB/s (101kB/s)(284KiB/2873msec) 00:40:10.401 slat (usec): min=25, max=2622, 
avg=62.58, stdev=306.00 00:40:10.401 clat (usec): min=800, max=42083, avg=40094.61, stdev=6732.06 00:40:10.401 lat (usec): min=830, max=44036, avg=40157.70, stdev=6745.36 00:40:10.401 clat percentiles (usec): 00:40:10.401 | 1.00th=[ 799], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:10.401 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:10.401 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:40:10.401 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:10.401 | 99.99th=[42206] 00:40:10.401 bw ( KiB/s): min= 96, max= 104, per=1.54%, avg=99.20, stdev= 4.38, samples=5 00:40:10.401 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:40:10.401 lat (usec) : 1000=1.39% 00:40:10.401 lat (msec) : 2=1.39%, 50=95.83% 00:40:10.401 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=2 00:40:10.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.401 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.401 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.401 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2604964: Fri Dec 6 17:06:58 2024 00:40:10.401 read: IOPS=232, BW=929KiB/s (951kB/s)(2536KiB/2730msec) 00:40:10.401 slat (nsec): min=7102, max=44125, avg=23453.11, stdev=7359.57 00:40:10.401 clat (usec): min=523, max=42063, avg=4234.00, stdev=11490.31 00:40:10.401 lat (usec): min=531, max=42089, avg=4257.45, stdev=11490.84 00:40:10.401 clat percentiles (usec): 00:40:10.401 | 1.00th=[ 586], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 693], 00:40:10.401 | 30.00th=[ 717], 40.00th=[ 734], 50.00th=[ 742], 60.00th=[ 758], 00:40:10.401 | 70.00th=[ 775], 80.00th=[ 791], 90.00th=[ 857], 95.00th=[42206], 00:40:10.401 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:10.401 | 99.99th=[42206] 00:40:10.401 bw ( KiB/s): min= 96, max= 2928, per=15.70%, avg=1006.40, stdev=1307.14, samples=5 00:40:10.401 iops : min= 24, max= 732, avg=251.60, stdev=326.79, samples=5 00:40:10.401 lat (usec) : 750=54.33%, 1000=36.69% 00:40:10.401 lat (msec) : 2=0.31%, 50=8.50% 00:40:10.401 cpu : usr=0.26%, sys=0.59%, ctx=635, majf=0, minf=2 00:40:10.401 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:10.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.401 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:10.401 issued rwts: total=635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:10.401 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:10.401 00:40:10.401 Run status group 0 (all jobs): 00:40:10.401 READ: bw=6409KiB/s (6563kB/s), 98.9KiB/s-5419KiB/s (101kB/s-5549kB/s), io=19.8MiB (20.7MB), run=2730-3161msec 00:40:10.401 00:40:10.401 Disk stats (read/write): 00:40:10.401 nvme0n1: ios=96/0, merge=0/0, ticks=2856/0, in_queue=2856, util=95.49% 00:40:10.401 nvme0n2: ios=4218/0, merge=0/0, ticks=2865/0, in_queue=2865, util=94.18% 00:40:10.401 nvme0n3: ios=71/0, merge=0/0, ticks=2849/0, in_queue=2849, util=96.45% 00:40:10.401 nvme0n4: ios=631/0, merge=0/0, ticks=2545/0, in_queue=2545, util=96.45% 00:40:10.661 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.661 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:10.661 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.661 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:10.921 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.921 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:10.921 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:10.921 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2604762 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:11.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:11.181 nvmf hotplug test: fio failed as expected 00:40:11.181 17:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:11.439 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:11.440 rmmod nvme_tcp 00:40:11.440 rmmod nvme_fabrics 00:40:11.440 rmmod nvme_keyring 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2601288 ']' 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2601288 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2601288 ']' 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2601288 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2601288 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2601288' 00:40:11.440 killing process with pid 2601288 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2601288 00:40:11.440 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2601288 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
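The teardown steps traced above follow the usual nvmf fio-target cleanup order: disconnect the kernel initiator, delete the subsystem from the running target over JSON-RPC, remove the fio verify-state files, and unload the NVMe-oF kernel modules. A minimal standalone sketch of that sequence, assuming the repo path and subsystem NQN shown in the trace (the variable names are illustrative only):

#!/usr/bin/env bash
# Sketch of the nvmf fio-target teardown performed in the trace above.
# SPDK path and NQN are taken from the log; adjust for other setups.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Disconnect the kernel NVMe-oF initiator from the subsystem.
nvme disconnect -n "$NQN"

# Remove the subsystem from the running SPDK target over JSON-RPC.
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem "$NQN"

# Drop the fio verify-state files left behind by the wrapper.
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state

# Best-effort unload of the kernel NVMe-oF modules, mirroring the
# modprobe -r / rmmod output seen in the log.
modprobe -r nvme-tcp nvme-fabrics nvme-keyring || true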
00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:11.699 17:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.600 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:13.600 00:40:13.600 real 0m24.715s 00:40:13.600 user 2m3.341s 00:40:13.600 sys 0m9.411s 00:40:13.601 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:13.601 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:13.601 ************************************ 00:40:13.601 END TEST nvmf_fio_target 00:40:13.601 ************************************ 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:13.859 ************************************ 00:40:13.859 START TEST nvmf_bdevio 00:40:13.859 ************************************ 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:13.859 * Looking for test storage... 
00:40:13.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:13.859 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:13.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.860 --rc genhtml_branch_coverage=1 00:40:13.860 --rc genhtml_function_coverage=1 00:40:13.860 --rc genhtml_legend=1 00:40:13.860 --rc geninfo_all_blocks=1 00:40:13.860 --rc geninfo_unexecuted_blocks=1 00:40:13.860 00:40:13.860 ' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:13.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.860 --rc genhtml_branch_coverage=1 00:40:13.860 --rc genhtml_function_coverage=1 00:40:13.860 --rc genhtml_legend=1 00:40:13.860 --rc geninfo_all_blocks=1 00:40:13.860 --rc geninfo_unexecuted_blocks=1 00:40:13.860 00:40:13.860 ' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:13.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.860 --rc genhtml_branch_coverage=1 00:40:13.860 --rc genhtml_function_coverage=1 00:40:13.860 --rc genhtml_legend=1 00:40:13.860 --rc geninfo_all_blocks=1 00:40:13.860 --rc geninfo_unexecuted_blocks=1 00:40:13.860 00:40:13.860 ' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:13.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:13.860 --rc genhtml_branch_coverage=1 00:40:13.860 --rc genhtml_function_coverage=1 00:40:13.860 --rc genhtml_legend=1 00:40:13.860 --rc geninfo_all_blocks=1 00:40:13.860 --rc geninfo_unexecuted_blocks=1 00:40:13.860 00:40:13.860 ' 00:40:13.860 17:07:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:13.860 17:07:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:13.860 17:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:19.132 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:19.132 17:07:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:19.132 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:19.132 Found net devices under 0000:31:00.0: cvl_0_0 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:19.132 Found net devices under 0000:31:00.1: cvl_0_1 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:19.132 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:19.133 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:19.133 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:19.133 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:19.133 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:19.133 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:19.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:19.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:40:19.391 00:40:19.391 --- 10.0.0.2 ping statistics --- 00:40:19.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.391 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:19.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:19.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:40:19.391 00:40:19.391 --- 10.0.0.1 ping statistics --- 00:40:19.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:19.391 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:19.391 17:07:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2610165 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2610165 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2610165 ']' 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:19.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:19.391 17:07:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.391 [2024-12-06 17:07:07.934406] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:19.391 [2024-12-06 17:07:07.935402] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:40:19.391 [2024-12-06 17:07:07.935441] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:19.391 [2024-12-06 17:07:08.007089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:19.391 [2024-12-06 17:07:08.023654] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:19.391 [2024-12-06 17:07:08.023684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:19.391 [2024-12-06 17:07:08.023689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:19.391 [2024-12-06 17:07:08.023694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:19.391 [2024-12-06 17:07:08.023698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:19.391 [2024-12-06 17:07:08.025183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:19.391 [2024-12-06 17:07:08.025442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:19.392 [2024-12-06 17:07:08.025557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:19.392 [2024-12-06 17:07:08.025558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:19.392 [2024-12-06 17:07:08.070063] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
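The trace above is the standard nvmfappstart flow: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace configured just before (target side 10.0.0.2, initiator side 10.0.0.1), with interrupt mode enabled and core mask 0x78, and waitforlisten blocks until the RPC socket answers; the reactor notices correspond to cores 3-6 of that mask. Stripped of the xtrace plumbing, the step reduces to roughly the sketch below, where the rpc_get_methods poll stands in for waitforlisten and the workspace path is copied from the log:

#!/usr/bin/env bash
# Condensed sketch of the nvmfappstart step traced above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start the target in the target namespace: shm id 0, all tracepoint
# groups (0xFFFF), interrupt mode, reactors on cores 3-6 (mask 0x78).
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
nvmfpid=$!
# Rough waitforlisten equivalent: poll the default RPC socket until it responds.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done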
00:40:19.392 [2024-12-06 17:07:08.071054] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:19.392 [2024-12-06 17:07:08.071459] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:19.392 [2024-12-06 17:07:08.071657] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:19.392 [2024-12-06 17:07:08.071661] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.650 [2024-12-06 17:07:08.122356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.650 Malloc0 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.650 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.651 17:07:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:19.651 [2024-12-06 17:07:08.186154] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:19.651 { 00:40:19.651 "params": { 00:40:19.651 "name": "Nvme$subsystem", 00:40:19.651 "trtype": "$TEST_TRANSPORT", 00:40:19.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:19.651 "adrfam": "ipv4", 00:40:19.651 "trsvcid": "$NVMF_PORT", 00:40:19.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:19.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:19.651 "hdgst": ${hdgst:-false}, 00:40:19.651 "ddgst": ${ddgst:-false} 00:40:19.651 }, 00:40:19.651 "method": "bdev_nvme_attach_controller" 00:40:19.651 } 00:40:19.651 EOF 00:40:19.651 )") 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:19.651 17:07:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:19.651 "params": { 00:40:19.651 "name": "Nvme1", 00:40:19.651 "trtype": "tcp", 00:40:19.651 "traddr": "10.0.0.2", 00:40:19.651 "adrfam": "ipv4", 00:40:19.651 "trsvcid": "4420", 00:40:19.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:19.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:19.651 "hdgst": false, 00:40:19.651 "ddgst": false 00:40:19.651 }, 00:40:19.651 "method": "bdev_nvme_attach_controller" 00:40:19.651 }' 00:40:19.651 [2024-12-06 17:07:08.222498] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
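gen_nvmf_target_json, expanded just above, emits one bdev_nvme_attach_controller entry per requested subsystem and hands it to bdevio as a JSON config over /dev/fd/62, so the tester sees the remote namespace as bdev Nvme1n1. A stand-alone equivalent, with the parameters copied verbatim from the printed config (the surrounding subsystems/config framing is SPDK's usual JSON-config shape and is an assumption here, since the log prints only the inner object):

#!/usr/bin/env bash
# Sketch: run bdevio against the 10.0.0.2:4420 listener created above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/test/bdev/bdevio/bdevio" --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF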
00:40:19.651 [2024-12-06 17:07:08.222552] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610340 ] 00:40:19.651 [2024-12-06 17:07:08.286468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:19.651 [2024-12-06 17:07:08.305157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:19.651 [2024-12-06 17:07:08.305475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:19.651 [2024-12-06 17:07:08.305476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:19.910 I/O targets: 00:40:19.910 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:19.910 00:40:19.910 00:40:19.910 CUnit - A unit testing framework for C - Version 2.1-3 00:40:19.910 http://cunit.sourceforge.net/ 00:40:19.910 00:40:19.910 00:40:19.910 Suite: bdevio tests on: Nvme1n1 00:40:20.169 Test: blockdev write read block ...passed 00:40:20.169 Test: blockdev write zeroes read block ...passed 00:40:20.169 Test: blockdev write zeroes read no split ...passed 00:40:20.169 Test: blockdev write zeroes read split ...passed 00:40:20.169 Test: blockdev write zeroes read split partial ...passed 00:40:20.169 Test: blockdev reset ...[2024-12-06 17:07:08.709805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:20.169 [2024-12-06 17:07:08.709859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd7230 (9): Bad file descriptor 00:40:20.169 [2024-12-06 17:07:08.805581] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:40:20.169 passed 00:40:20.169 Test: blockdev write read 8 blocks ...passed 00:40:20.169 Test: blockdev write read size > 128k ...passed 00:40:20.169 Test: blockdev write read invalid size ...passed 00:40:20.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:20.169 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:20.169 Test: blockdev write read max offset ...passed 00:40:20.427 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:20.427 Test: blockdev writev readv 8 blocks ...passed 00:40:20.427 Test: blockdev writev readv 30 x 1block ...passed 00:40:20.427 Test: blockdev writev readv block ...passed 00:40:20.427 Test: blockdev writev readv size > 128k ...passed 00:40:20.427 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:20.427 Test: blockdev comparev and writev ...[2024-12-06 17:07:09.029713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.029742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.029756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.029763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.030243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.030252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.030262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.030267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.030752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.030760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.030769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.030775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.031264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.031278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.031287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:20.427 [2024-12-06 17:07:09.031293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:20.427 passed 00:40:20.427 Test: blockdev nvme passthru rw ...passed 00:40:20.427 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:07:09.116762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:20.427 [2024-12-06 17:07:09.116772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.117110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:20.427 [2024-12-06 17:07:09.117117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.117465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:20.427 [2024-12-06 17:07:09.117472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:20.427 [2024-12-06 17:07:09.117814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:20.427 [2024-12-06 17:07:09.117821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:20.427 passed 00:40:20.687 Test: blockdev nvme admin passthru ...passed 00:40:20.687 Test: blockdev copy ...passed 00:40:20.687 00:40:20.687 Run Summary: Type Total Ran Passed Failed Inactive 00:40:20.687 suites 1 1 n/a 0 0 00:40:20.687 tests 23 23 23 0 0 00:40:20.687 asserts 152 152 152 0 n/a 00:40:20.687 00:40:20.687 Elapsed time = 1.154 seconds 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:20.687 rmmod nvme_tcp 00:40:20.687 rmmod nvme_fabrics 00:40:20.687 rmmod nvme_keyring 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
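The shutdown path here mirrors the setup: modprobe -v -r has just unloaded nvme-tcp (the rmmod lines above, which also drop nvme_fabrics and nvme_keyring), and the trace that follows kills the target (pid 2610165, running as reactor_3), strips the SPDK_NVMF-tagged iptables rules, and removes the namespace. Condensed into plain commands (a sketch; _remove_spdk_ns is an autotest helper, approximated here with ip netns delete):

# Sketch of the nvmftestfini / nvmf_tcp_fini sequence traced around this point.
sync
modprobe -v -r nvme-tcp nvme-fabrics 2> /dev/null || true  # also drops nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid" 2> /dev/null            # killprocess + wait
# iptr: reload the ruleset without the SPDK_NVMF-tagged ACCEPT rules.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# remove_spdk_ns equivalent, then flush the initiator-side address.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1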
00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2610165 ']' 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2610165 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2610165 ']' 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2610165 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.687 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2610165 00:40:20.947 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:20.947 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:20.947 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2610165' 00:40:20.947 killing process with pid 2610165 00:40:20.947 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2610165 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2610165 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:20.948 17:07:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:22.852 17:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:22.852 00:40:22.852 real 0m9.214s 00:40:22.852 user 
0m8.356s 00:40:22.852 sys 0m4.795s 00:40:22.852 17:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.852 17:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:22.852 ************************************ 00:40:22.852 END TEST nvmf_bdevio 00:40:22.852 ************************************ 00:40:23.112 17:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:23.112 00:40:23.112 real 4m18.217s 00:40:23.112 user 9m33.302s 00:40:23.112 sys 1m35.539s 00:40:23.112 17:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:23.112 17:07:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:23.112 ************************************ 00:40:23.112 END TEST nvmf_target_core_interrupt_mode 00:40:23.112 ************************************ 00:40:23.112 17:07:11 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:23.112 17:07:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:23.112 17:07:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:23.112 17:07:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.112 ************************************ 00:40:23.112 START TEST nvmf_interrupt 00:40:23.112 ************************************ 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:23.112 * Looking for test storage... 
00:40:23.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.112 --rc genhtml_branch_coverage=1 00:40:23.112 --rc genhtml_function_coverage=1 00:40:23.112 --rc genhtml_legend=1 00:40:23.112 --rc geninfo_all_blocks=1 00:40:23.112 --rc geninfo_unexecuted_blocks=1 00:40:23.112 00:40:23.112 ' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.112 --rc genhtml_branch_coverage=1 00:40:23.112 --rc genhtml_function_coverage=1 00:40:23.112 --rc genhtml_legend=1 00:40:23.112 --rc geninfo_all_blocks=1 00:40:23.112 --rc geninfo_unexecuted_blocks=1 00:40:23.112 00:40:23.112 ' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.112 --rc genhtml_branch_coverage=1 00:40:23.112 --rc genhtml_function_coverage=1 00:40:23.112 --rc genhtml_legend=1 00:40:23.112 --rc geninfo_all_blocks=1 00:40:23.112 --rc geninfo_unexecuted_blocks=1 00:40:23.112 00:40:23.112 ' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:23.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:23.112 --rc genhtml_branch_coverage=1 00:40:23.112 --rc genhtml_function_coverage=1 00:40:23.112 --rc genhtml_legend=1 00:40:23.112 --rc geninfo_all_blocks=1 00:40:23.112 --rc geninfo_unexecuted_blocks=1 00:40:23.112 00:40:23.112 ' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.112 17:07:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:23.113 17:07:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:40:28.391 Found 0000:31:00.0 (0x8086 - 0x159b) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.391 17:07:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:40:28.391 Found 0000:31:00.1 (0x8086 - 0x159b) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:40:28.391 Found net devices under 0000:31:00.0: cvl_0_0 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:40:28.391 Found net devices under 0000:31:00.1: cvl_0_1 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:28.391 17:07:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:28.391 17:07:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:28.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:28.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:40:28.650 00:40:28.650 --- 10.0.0.2 ping statistics --- 00:40:28.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.650 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:28.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:28.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:40:28.650 00:40:28.650 --- 10.0.0.1 ping statistics --- 00:40:28.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:28.650 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2614800 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2614800 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2614800 ']' 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:28.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:28.650 17:07:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:28.650 [2024-12-06 17:07:17.279381] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:28.650 [2024-12-06 17:07:17.280428] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:40:28.650 [2024-12-06 17:07:17.280469] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:28.911 [2024-12-06 17:07:17.370346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:28.911 [2024-12-06 17:07:17.397709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:28.911 [2024-12-06 17:07:17.397758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:28.911 [2024-12-06 17:07:17.397767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:28.911 [2024-12-06 17:07:17.397774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:28.911 [2024-12-06 17:07:17.397780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:28.911 [2024-12-06 17:07:17.399352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:28.911 [2024-12-06 17:07:17.399359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.911 [2024-12-06 17:07:17.464382] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:28.911 [2024-12-06 17:07:17.464476] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:28.911 [2024-12-06 17:07:17.464621] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:29.481 5000+0 records in 00:40:29.481 5000+0 records out 00:40:29.481 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00890923 s, 1.1 GB/s 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:29.481 AIO0 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.481 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:29.481 [2024-12-06 17:07:18.172311] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.741 17:07:18 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:29.741 [2024-12-06 17:07:18.200763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2614800 0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614800 0 idle 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614800 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.23 reactor_0' 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614800 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.23 reactor_0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2614800 1 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614800 1 idle 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256 00:40:29.741 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614849 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614849 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2615081 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
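The spdk_nvme_perf invocation above drives 10 seconds of 4096-byte random read/write traffic at queue depth 256 from cores 2-3 (-c 0xC) against the subsystem, while the target's two reactors sit on cores 0-1 in interrupt mode. The BUSY_THRESHOLD check that follows is the harness verifying that both reactors actually woke up under load. The probe visible in the interrupt/common.sh trace boils down to a single batch-mode top snapshot; below is a minimal standalone sketch of that logic (the function wrapper is ours; the pipeline, the thresholds, and the integer truncation mirror the trace):

# Read the %CPU of thread reactor_<idx> inside process <pid> from one
# batch-mode top snapshot (-b batch, -H show threads, -n 1 one iteration).
reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" \
        | sed -e 's/^\s*//g' | awk '{print $9}'
}

rate=$(reactor_cpu_rate 2614800 0)   # e.g. 0.0 when idle, 99.9 under load
rate=${rate%.*}                      # truncate to an integer, as the trace does
(( rate > 30 )) && echo busy || echo idle   # 30 is the BUSY_THRESHOLD set below

The same sampling, compared against idle_threshold=30 instead, is what the reactor_is_idle calls later in this log rely on.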
00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2614800 0 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2614800 0 busy 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256 00:40:30.001 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614800 root 20 0 128.2g 44928 32256 R 40.0 0.0 0:00.30 reactor_0' 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614800 root 20 0 128.2g 44928 32256 R 40.0 0.0 0:00.30 reactor_0 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=40.0 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=40 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2614800 1 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2614800 1 busy 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614849 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.20 reactor_1' 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614849 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.20 reactor_1 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:30.262 17:07:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2615081 00:40:40.246 Initializing NVMe Controllers 00:40:40.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:40.246 Controller IO queue size 256, less than required. 00:40:40.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:40.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:40.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:40.246 Initialization complete. Launching workers. 
00:40:40.246 ========================================================
00:40:40.246 Latency(us)
00:40:40.246 Device Information : IOPS MiB/s Average min max
00:40:40.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20878.20 81.56 12265.12 3594.30 20342.35
00:40:40.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 21630.20 84.49 11839.07 3590.93 20006.77
00:40:40.246 ========================================================
00:40:40.246 Total : 42508.40 166.05 12048.33 3590.93 20342.35
00:40:40.246
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2614800 0
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614800 0 idle
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614800 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.23 reactor_0'
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614800 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.23 reactor_0
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2614800 1
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614800 1 idle
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800
00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256 00:40:40.246 17:07:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614849 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.99 reactor_1' 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614849 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.99 reactor_1 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:40.507 17:07:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:41.076 17:07:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:41.076 17:07:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:41.076 17:07:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:41.076 17:07:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:41.076 17:07:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2614800 0 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614800 0 idle 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256 00:40:42.988 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614800 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.38 reactor_0' 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614800 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.38 reactor_0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2614800 1 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614800 1 idle 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614800 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
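Just above, target/interrupt.sh@50 attaches the kernel initiator with nvme connect and waitforserial polls until the namespace surfaces as a block device; the xtrace around this note is the idle re-check that follows a successful connect. Stripped of harness plumbing, that connect-and-wait step is roughly this sketch (the addresses, NQNs, serial, retry bound, and sleep interval are the values recorded in the trace; only the loop shape is paraphrased):

# Attach the kernel NVMe/TCP initiator to the subsystem created earlier.
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# waitforserial: poll lsblk until a device reports the subsystem serial.
for _ in $(seq 1 15); do
    [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ] && break
    sleep 2
done

The matching teardown appears near the end of this log: nvme disconnect -n nqn.2016-06.io.spdk:cnode1, then killprocess on the nvmf_tgt pid and a flush of the namespaced interfaces.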
00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614800 -w 256 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614849 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.04 reactor_1' 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614849 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.04 reactor_1 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:43.248 17:07:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:43.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:43.508 rmmod nvme_tcp 00:40:43.508 rmmod nvme_fabrics 00:40:43.508 rmmod nvme_keyring 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2614800 ']' 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2614800 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2614800 ']' 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2614800 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2614800 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2614800' 00:40:43.508 killing process with pid 2614800 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2614800 00:40:43.508 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2614800 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:43.767 17:07:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.676 17:07:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:45.676 00:40:45.676 real 0m22.700s 00:40:45.676 user 0m39.687s 00:40:45.676 sys 0m7.431s 00:40:45.676 17:07:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:45.676 17:07:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:45.676 ************************************ 00:40:45.676 END TEST nvmf_interrupt 00:40:45.676 ************************************ 00:40:45.676 00:40:45.676 real 33m56.539s 00:40:45.676 user 86m12.950s 00:40:45.676 sys 8m59.834s 00:40:45.676 17:07:34 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:45.676 17:07:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:45.676 ************************************ 00:40:45.676 END TEST nvmf_tcp 00:40:45.676 ************************************ 00:40:45.676 17:07:34 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:45.676 17:07:34 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:45.676 17:07:34 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:45.676 17:07:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:45.676 17:07:34 -- common/autotest_common.sh@10 -- # set +x 00:40:45.935 ************************************ 00:40:45.935 START TEST spdkcli_nvmf_tcp 00:40:45.935 ************************************ 00:40:45.935 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:45.935 * Looking for test storage... 00:40:45.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:45.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.936 --rc genhtml_branch_coverage=1 00:40:45.936 --rc genhtml_function_coverage=1 00:40:45.936 --rc genhtml_legend=1 00:40:45.936 --rc geninfo_all_blocks=1 00:40:45.936 --rc geninfo_unexecuted_blocks=1 00:40:45.936 00:40:45.936 ' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:45.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.936 --rc genhtml_branch_coverage=1 00:40:45.936 --rc genhtml_function_coverage=1 00:40:45.936 --rc genhtml_legend=1 00:40:45.936 --rc geninfo_all_blocks=1 00:40:45.936 --rc geninfo_unexecuted_blocks=1 00:40:45.936 00:40:45.936 ' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:45.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.936 --rc genhtml_branch_coverage=1 00:40:45.936 --rc genhtml_function_coverage=1 00:40:45.936 --rc genhtml_legend=1 00:40:45.936 --rc geninfo_all_blocks=1 00:40:45.936 --rc geninfo_unexecuted_blocks=1 00:40:45.936 00:40:45.936 ' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:45.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.936 --rc genhtml_branch_coverage=1 00:40:45.936 --rc genhtml_function_coverage=1 00:40:45.936 --rc genhtml_legend=1 00:40:45.936 --rc geninfo_all_blocks=1 00:40:45.936 --rc geninfo_unexecuted_blocks=1 00:40:45.936 00:40:45.936 ' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:45.936 
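The version check traced above ("lt 1.15 2") is scripts/common.sh deciding whether the installed lcov predates 2.x before picking coverage flags. A minimal sketch of that dotted-version comparison, reconstructed from the traced helpers (cmp_versions' decimal validation and error handling omitted), might look like:

  #!/usr/bin/env bash
  # Split versions on . - : (the IFS the trace uses) and compare
  # component-wise, padding the shorter array with zeros.
  lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not less-than
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                                              # equal: not less-than
  }
  lt 1.15 2 && echo "lcov < 2: use legacy --rc coverage flags"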
17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:45.936 17:07:34 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:45.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2618568 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2618568 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2618568 ']' 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:45.936 17:07:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:45.937 [2024-12-06 17:07:34.553303] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
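run_nvmf_tgt above launches the target app (nvmf_tgt -m 0x3 -p 0) and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start-and-wait pattern, assuming the stock rpc.py probe against the default /var/tmp/spdk.sock (the real waitforlisten in autotest_common.sh also enforces max_retries):

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/build/bin/nvmf_tgt" -m 0x3 -p 0 &
  nvmf_tgt_pid=$!
  # Poll the UNIX-domain RPC socket; bail out if the target dies first.
  while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; do
      kill -0 "$nvmf_tgt_pid" 2>/dev/null || { echo "target exited early"; exit 1; }
      sleep 0.1
  done
  echo "nvmf_tgt ($nvmf_tgt_pid) is listening on /var/tmp/spdk.sock"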
00:40:45.937 [2024-12-06 17:07:34.553360] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2618568 ] 00:40:45.937 [2024-12-06 17:07:34.617117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:46.197 [2024-12-06 17:07:34.634933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:46.197 [2024-12-06 17:07:34.634937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:46.197 17:07:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:46.197 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:46.197 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:46.197 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:46.197 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:46.197 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:46.197 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:46.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:46.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:46.197 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:46.197 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:46.197 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:46.197 ' 00:40:48.731 [2024-12-06 17:07:37.131958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:50.109 [2024-12-06 17:07:38.363730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:52.015 [2024-12-06 17:07:40.658201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:54.011 [2024-12-06 17:07:42.627842] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:55.918 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:55.918 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:55.918 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:55.918 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:55.918 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:55.918 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:55.918 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:55.918 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:55.918 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:55.918 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:55.918 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:55.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:55.918 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:55.918 17:07:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:56.177 
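check_match above snapshots the live configuration tree with spdkcli.py ll /nvmf, compares it against the expected-output file using the match tool (whose .match files allow wildcarded lines), and deletes the generated file on success. A condensed sketch of that sequence, assuming the ll output is redirected into the .test file as spdkcli/common.sh appears to do:

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  testfile=$rootdir/test/spdkcli/match_files/spdkcli_nvmf.test
  "$rootdir/scripts/spdkcli.py" ll /nvmf > "$testfile"   # snapshot the live tree
  "$rootdir/test/app/match/match" "$testfile.match"      # diff against expected output
  rm -f "$testfile"                                      # clean up on success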
17:07:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:56.177 17:07:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:56.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:56.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:56.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:56.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:56.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:56.177 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:56.177 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:56.177 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:56.178 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:56.178 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:56.178 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:56.178 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:56.178 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:56.178 ' 00:41:01.447 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:01.447 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:01.447 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:01.447 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:01.447 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:01.447 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:01.447 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:01.447 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:01.447 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:01.447 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:01.447 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:01.447 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:01.447 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:01.447 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:01.447 
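The teardown that follows is autotest_common.sh's killprocess helper, the same one used for pid 2614800 earlier: confirm the pid is alive with kill -0, look up its command name (reactor_0 here) to avoid ever killing a wrapping sudo, then kill and reap it. Reconstructed from the traced checks, a simplified sketch:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                  # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1                 # still running?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1     # never kill a wrapping sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                # reap so the pid cannot be reused
  }

The "No such process" path visible below is this helper tolerating a target that already exited during the earlier cleanup.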
17:07:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2618568 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2618568 ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2618568 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2618568 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2618568' 00:41:01.447 killing process with pid 2618568 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2618568 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2618568 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2618568 ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2618568 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2618568 ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2618568 00:41:01.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2618568) - No such process 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2618568 is not found' 00:41:01.447 Process with pid 2618568 is not found 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:01.447 00:41:01.447 real 0m15.621s 00:41:01.447 user 0m33.315s 00:41:01.447 sys 0m0.564s 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:01.447 17:07:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:01.447 ************************************ 00:41:01.447 END TEST spdkcli_nvmf_tcp 00:41:01.447 ************************************ 00:41:01.447 17:07:50 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:01.447 17:07:50 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:01.447 17:07:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:01.447 17:07:50 -- common/autotest_common.sh@10 -- # set +x 00:41:01.447 ************************************ 00:41:01.447 START TEST nvmf_identify_passthru 00:41:01.447 ************************************ 00:41:01.447 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:01.447 * Looking for test 
storage... 00:41:01.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:01.447 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:01.447 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:01.447 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:01.706 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:01.706 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:01.706 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:01.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.706 --rc genhtml_branch_coverage=1 00:41:01.706 --rc genhtml_function_coverage=1 00:41:01.706 --rc genhtml_legend=1 00:41:01.706 --rc geninfo_all_blocks=1 00:41:01.706 --rc geninfo_unexecuted_blocks=1 00:41:01.706 00:41:01.706 ' 00:41:01.706 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:01.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.706 --rc genhtml_branch_coverage=1 00:41:01.706 --rc genhtml_function_coverage=1 00:41:01.706 --rc genhtml_legend=1 00:41:01.706 --rc geninfo_all_blocks=1 00:41:01.706 --rc geninfo_unexecuted_blocks=1 00:41:01.706 00:41:01.706 ' 00:41:01.706 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:01.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.706 --rc genhtml_branch_coverage=1 00:41:01.706 --rc genhtml_function_coverage=1 00:41:01.706 --rc genhtml_legend=1 00:41:01.706 --rc geninfo_all_blocks=1 00:41:01.706 --rc geninfo_unexecuted_blocks=1 00:41:01.706 00:41:01.706 ' 00:41:01.706 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:01.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:01.706 --rc genhtml_branch_coverage=1 00:41:01.706 --rc genhtml_function_coverage=1 00:41:01.706 --rc genhtml_legend=1 00:41:01.706 --rc geninfo_all_blocks=1 00:41:01.706 --rc geninfo_unexecuted_blocks=1 00:41:01.706 00:41:01.706 ' 00:41:01.706 17:07:50 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.706 17:07:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:01.706 17:07:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.706 17:07:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.706 17:07:50 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.706 17:07:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:01.706 17:07:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.706 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:01.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:01.707 17:07:50 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:01.707 17:07:50 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:01.707 17:07:50 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:01.707 17:07:50 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:01.707 17:07:50 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:01.707 17:07:50 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.707 17:07:50 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.707 17:07:50 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.707 17:07:50 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:01.707 17:07:50 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:01.707 17:07:50 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.707 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:01.707 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:01.707 17:07:50 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:01.707 17:07:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:06.979 17:07:55 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:06.979 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:06.980 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:06.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:06.980 Found net devices under 0000:31:00.0: cvl_0_0 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:06.980 Found net devices under 0000:31:00.1: cvl_0_1 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:06.980 17:07:55 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:06.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:06.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:41:06.980 00:41:06.980 --- 10.0.0.2 ping statistics --- 00:41:06.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.980 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:06.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:06.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:41:06.980 00:41:06.980 --- 10.0.0.1 ping statistics --- 00:41:06.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:06.980 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:06.980 17:07:55 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:06.980 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:06.980 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:41:06.980 17:07:55 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:41:06.980 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:41:06.980 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:41:06.980 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:06.980 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:41:06.980 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:07.550 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605499 00:41:07.550 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:07.550 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:07.550 17:07:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:41:07.811 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:41:07.811 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.811 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.811 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2625975 00:41:07.811 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:07.811 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2625975 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2625975 ']' 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:07.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:07.811 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:07.811 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:07.811 [2024-12-06 17:07:56.485583] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:41:07.811 [2024-12-06 17:07:56.485638] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:08.071 [2024-12-06 17:07:56.556570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:08.071 [2024-12-06 17:07:56.574176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:08.071 [2024-12-06 17:07:56.574208] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
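get_first_nvme_bdf above resolves the first local controller (0000:65:00.0) by asking gen_nvme.sh for its auto-detected config and pulling traddr out with jq; the serial and model strings are then scraped from spdk_nvme_identify output. A compacted sketch of those two steps, using the same field positions as the traced grep/awk pipeline (head -n1 stands in for the trace's array handling):

  #!/usr/bin/env bash
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
  serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
           | awk '/Serial Number:/ {print $3}')
  model=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
           | awk '/Model Number:/ {print $3}')   # $3 keeps only the first word, e.g. SAMSUNG
  echo "first NVMe: $bdf serial=$serial model=$model"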
00:41:08.071 [2024-12-06 17:07:56.574214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:08.071 [2024-12-06 17:07:56.574219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:08.071 [2024-12-06 17:07:56.574223] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:08.071 [2024-12-06 17:07:56.575465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:08.071 [2024-12-06 17:07:56.575623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:08.071 [2024-12-06 17:07:56.575774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:08.071 [2024-12-06 17:07:56.575776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:08.071 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.071 INFO: Log level set to 20 00:41:08.071 INFO: Requests: 00:41:08.071 { 00:41:08.071 "jsonrpc": "2.0", 00:41:08.071 "method": "nvmf_set_config", 00:41:08.071 "id": 1, 00:41:08.071 "params": { 00:41:08.071 "admin_cmd_passthru": { 00:41:08.071 "identify_ctrlr": true 00:41:08.071 } 00:41:08.071 } 00:41:08.071 } 00:41:08.071 00:41:08.071 INFO: response: 00:41:08.071 { 00:41:08.071 "jsonrpc": "2.0", 00:41:08.071 "id": 1, 00:41:08.071 "result": true 00:41:08.071 } 00:41:08.071 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.071 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.071 INFO: Setting log level to 20 00:41:08.071 INFO: Setting log level to 20 00:41:08.071 INFO: Log level set to 20 00:41:08.071 INFO: Log level set to 20 00:41:08.071 INFO: Requests: 00:41:08.071 { 00:41:08.071 "jsonrpc": "2.0", 00:41:08.071 "method": "framework_start_init", 00:41:08.071 "id": 1 00:41:08.071 } 00:41:08.071 00:41:08.071 INFO: Requests: 00:41:08.071 { 00:41:08.071 "jsonrpc": "2.0", 00:41:08.071 "method": "framework_start_init", 00:41:08.071 "id": 1 00:41:08.071 } 00:41:08.071 00:41:08.071 [2024-12-06 17:07:56.665500] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:08.071 INFO: response: 00:41:08.071 { 00:41:08.071 "jsonrpc": "2.0", 00:41:08.071 "id": 1, 00:41:08.071 "result": true 00:41:08.071 } 00:41:08.071 00:41:08.071 INFO: response: 00:41:08.071 { 00:41:08.071 "jsonrpc": "2.0", 00:41:08.071 "id": 1, 00:41:08.071 "result": true 00:41:08.071 } 00:41:08.071 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.071 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.071 17:07:56 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:41:08.071 INFO: Setting log level to 40 00:41:08.071 INFO: Setting log level to 40 00:41:08.071 INFO: Setting log level to 40 00:41:08.071 [2024-12-06 17:07:56.674530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.071 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.071 17:07:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.071 17:07:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.331 Nvme0n1 00:41:08.331 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.331 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:08.331 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.331 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.591 [2024-12-06 17:07:57.039063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.591 [ 00:41:08.591 { 00:41:08.591 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:08.591 "subtype": "Discovery", 00:41:08.591 "listen_addresses": [], 00:41:08.591 "allow_any_host": true, 00:41:08.591 "hosts": [] 00:41:08.591 }, 00:41:08.591 { 00:41:08.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:08.591 "subtype": "NVMe", 00:41:08.591 "listen_addresses": [ 00:41:08.591 { 00:41:08.591 "trtype": "TCP", 00:41:08.591 "adrfam": "IPv4", 00:41:08.591 "traddr": "10.0.0.2", 00:41:08.591 "trsvcid": "4420" 00:41:08.591 } 00:41:08.591 ], 00:41:08.591 "allow_any_host": true, 00:41:08.591 "hosts": [], 00:41:08.591 "serial_number": 
"SPDK00000000000001", 00:41:08.591 "model_number": "SPDK bdev Controller", 00:41:08.591 "max_namespaces": 1, 00:41:08.591 "min_cntlid": 1, 00:41:08.591 "max_cntlid": 65519, 00:41:08.591 "namespaces": [ 00:41:08.591 { 00:41:08.591 "nsid": 1, 00:41:08.591 "bdev_name": "Nvme0n1", 00:41:08.591 "name": "Nvme0n1", 00:41:08.591 "nguid": "363447305260549900253845000000A3", 00:41:08.591 "uuid": "36344730-5260-5499-0025-3845000000a3" 00:41:08.591 } 00:41:08.591 ] 00:41:08.591 } 00:41:08.591 ] 00:41:08.591 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:08.591 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:08.851 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:41:08.851 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']' 00:41:08.851 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:41:08.851 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:08.851 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:08.851 17:07:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:08.851 rmmod nvme_tcp 00:41:08.851 rmmod nvme_fabrics 00:41:08.851 rmmod nvme_keyring 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2625975 ']' 00:41:08.851 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2625975 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2625975 ']' 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2625975 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2625975 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2625975' 00:41:08.851 killing process with pid 2625975 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2625975 00:41:08.851 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2625975 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:09.110 17:07:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:09.110 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:09.110 17:07:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.644 17:07:59 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:11.644 00:41:11.644 real 0m9.761s 00:41:11.644 user 0m6.290s 00:41:11.644 sys 0m4.461s 00:41:11.644 17:07:59 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:11.644 17:07:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:11.644 ************************************ 00:41:11.644 END TEST nvmf_identify_passthru 00:41:11.644 ************************************ 00:41:11.644 17:07:59 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:11.644 17:07:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:11.644 17:07:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:11.644 17:07:59 -- common/autotest_common.sh@10 -- # set +x 00:41:11.644 ************************************ 00:41:11.644 START TEST nvmf_dif 00:41:11.644 ************************************ 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:11.644 * Looking for test storage... 
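Stripped of the xtrace noise, the passthru test that just finished is a fixed RPC sequence: enable identify passthru while the target is still paused, finish initialization, create the TCP transport, attach the local PCIe controller, export it as a single-namespace subsystem, and re-run identify across the TCP listener. A sketch using scripts/rpc.py directly (rpc_cmd in the trace forwards these same arguments to rpc.py), with the arguments recorded above:

    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must precede framework_start_init
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Identify over the fabric; serial/model must match the PCIe values
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The '!=' comparisons at identify_passthru.sh lines 63 and 68 above are the actual assertions: the test passes only because the fabric-side identify returned the same S64GNE0R605499 / SAMSUNG pair.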
00:41:11.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:11.644 17:07:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:11.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.644 --rc genhtml_branch_coverage=1 00:41:11.644 --rc genhtml_function_coverage=1 00:41:11.644 --rc genhtml_legend=1 00:41:11.644 --rc geninfo_all_blocks=1 00:41:11.644 --rc geninfo_unexecuted_blocks=1 00:41:11.644 00:41:11.644 ' 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:11.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.644 --rc genhtml_branch_coverage=1 00:41:11.644 --rc genhtml_function_coverage=1 00:41:11.644 --rc genhtml_legend=1 00:41:11.644 --rc geninfo_all_blocks=1 00:41:11.644 --rc geninfo_unexecuted_blocks=1 00:41:11.644 00:41:11.644 ' 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:41:11.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.644 --rc genhtml_branch_coverage=1 00:41:11.644 --rc genhtml_function_coverage=1 00:41:11.644 --rc genhtml_legend=1 00:41:11.644 --rc geninfo_all_blocks=1 00:41:11.644 --rc geninfo_unexecuted_blocks=1 00:41:11.644 00:41:11.644 ' 00:41:11.644 17:07:59 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:11.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.644 --rc genhtml_branch_coverage=1 00:41:11.644 --rc genhtml_function_coverage=1 00:41:11.644 --rc genhtml_legend=1 00:41:11.644 --rc geninfo_all_blocks=1 00:41:11.644 --rc geninfo_unexecuted_blocks=1 00:41:11.644 00:41:11.644 ' 00:41:11.644 17:07:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:11.644 17:07:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:11.644 17:08:00 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:11.644 17:08:00 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:11.644 17:08:00 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:11.644 17:08:00 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:11.644 17:08:00 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.644 17:08:00 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.644 17:08:00 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.644 17:08:00 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:11.644 17:08:00 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:11.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:11.644 17:08:00 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:11.644 17:08:00 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:11.644 17:08:00 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:11.644 17:08:00 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:11.644 17:08:00 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:11.644 17:08:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.644 17:08:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:11.645 17:08:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.645 17:08:00 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:11.645 17:08:00 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:11.645 17:08:00 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:11.645 17:08:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:41:16.914 Found 0000:31:00.0 (0x8086 - 0x159b) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:16.914 
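The vendor/device tables being built above drive NIC discovery for the dif test: the e810, x722 and mlx arrays collect the PCI IDs worth testing, and each matching bdf is then mapped to its kernel netdev through sysfs, as the loop that follows shows. Reduced to its core (a sketch; pci_devs is assumed populated by the table match above):

    for pci in "${pci_devs[@]}"; do
        # every netdev registered for this PCI function shows up here
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done

On this machine both functions of the e810 NIC (0x8086:0x159b) resolve, yielding cvl_0_0 and cvl_0_1.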
17:08:05 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:41:16.914 Found 0000:31:00.1 (0x8086 - 0x159b) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:41:16.914 Found net devices under 0000:31:00.0: cvl_0_0 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:41:16.914 Found net devices under 0000:31:00.1: cvl_0_1 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:16.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:16.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:41:16.914 00:41:16.914 --- 10.0.0.2 ping statistics --- 00:41:16.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:16.914 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:16.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:16.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:41:16.914 00:41:16.914 --- 10.0.0.1 ping statistics --- 00:41:16.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:16.914 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:16.914 17:08:05 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:18.816 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:41:18.816 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:41:18.816 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:41:18.816 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:41:18.816 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:41:18.816 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:41:18.816 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:41:18.816 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:41:19.074 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:41:19.074 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:19.333 17:08:07 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:19.333 17:08:07 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2632102 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2632102 00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2632102 ']' 00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:19.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
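The topology assembled in the nvmftestinit block above is the standard two-interface split used throughout these tests: the target side of the NIC pair moves into its own namespace, each side gets an address from 10.0.0.0/24, an iptables rule opens the NVMe/TCP port on the initiator interface, and a ping in each direction proves connectivity. In outline, using the interface names discovered above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

With connectivity proven, nvmf_tgt is launched in the namespace; unlike the passthru test, the transport here is created with '-t tcp -o --dif-insert-or-strip', which is the feature under test.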
00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:19.333 17:08:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:19.333 17:08:07 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:19.333 [2024-12-06 17:08:07.954066] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:41:19.333 [2024-12-06 17:08:07.954137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:19.592 [2024-12-06 17:08:08.047484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.592 [2024-12-06 17:08:08.074404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:19.592 [2024-12-06 17:08:08.074454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:19.592 [2024-12-06 17:08:08.074462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:19.592 [2024-12-06 17:08:08.074469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:19.592 [2024-12-06 17:08:08.074475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:19.592 [2024-12-06 17:08:08.075208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:20.159 17:08:08 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.159 17:08:08 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:20.159 17:08:08 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:20.159 17:08:08 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.159 [2024-12-06 17:08:08.775369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.159 17:08:08 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.159 17:08:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:20.159 ************************************ 00:41:20.159 START TEST fio_dif_1_default 00:41:20.159 ************************************ 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:20.159 
17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.159 bdev_null0 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:20.159 [2024-12-06 17:08:08.835676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:20.159 17:08:08 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:20.159 { 00:41:20.159 "params": { 00:41:20.159 "name": "Nvme$subsystem", 00:41:20.159 "trtype": "$TEST_TRANSPORT", 00:41:20.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:20.159 "adrfam": "ipv4", 00:41:20.159 "trsvcid": "$NVMF_PORT", 00:41:20.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:20.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:20.159 "hdgst": ${hdgst:-false}, 00:41:20.159 "ddgst": ${ddgst:-false} 00:41:20.159 }, 00:41:20.159 "method": "bdev_nvme_attach_controller" 00:41:20.159 } 00:41:20.159 EOF 00:41:20.159 )") 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
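The heredoc above is gen_nvmf_target_json assembling, per subsystem index, the bdev_nvme_attach_controller parameters that fio's SPDK bdev plugin will consume; jq then validates the fragments before they are handed to fio over a file descriptor. Conceptually the invocation amounts to the following (a sketch: gen_fio_conf and gen_nvmf_target_json are helpers from target/dif.sh, and the harness's actual fd wiring through /dev/fd/62 and /dev/fd/61 differs slightly from plain process substitution):

    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf=<(gen_nvmf_target_json 0) \
        <(gen_fio_conf)

In the JSON printed next, hdgst/ddgst default to false and the traddr/trsvcid come straight from the listener created a few lines earlier, so fio attaches to the same nqn.2016-06.io.spdk:cnode0 subsystem the test just exported.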
00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:20.159 17:08:08 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:20.159 "params": { 00:41:20.159 "name": "Nvme0", 00:41:20.159 "trtype": "tcp", 00:41:20.159 "traddr": "10.0.0.2", 00:41:20.159 "adrfam": "ipv4", 00:41:20.159 "trsvcid": "4420", 00:41:20.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:20.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:20.159 "hdgst": false, 00:41:20.159 "ddgst": false 00:41:20.159 }, 00:41:20.159 "method": "bdev_nvme_attach_controller" 00:41:20.159 }' 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:20.440 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:20.441 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:20.441 17:08:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:20.700 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:20.700 fio-3.35 00:41:20.700 Starting 1 thread 00:41:32.920 00:41:32.920 filename0: (groupid=0, jobs=1): err= 0: pid=2632657: Fri Dec 6 17:08:19 2024 00:41:32.920 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:41:32.920 slat (nsec): min=4107, max=17035, avg=5771.43, stdev=842.98 00:41:32.920 clat (usec): min=40823, max=45114, avg=41005.28, stdev=271.06 00:41:32.920 lat (usec): min=40829, max=45128, avg=41011.05, stdev=271.14 00:41:32.920 clat percentiles (usec): 00:41:32.920 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:32.921 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:32.921 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:32.921 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:41:32.921 | 99.99th=[45351] 00:41:32.921 bw ( KiB/s): min= 384, max= 416, per=99.48%, avg=388.80, stdev=11.72, samples=20 00:41:32.921 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:32.921 lat (msec) : 50=100.00% 00:41:32.921 cpu : usr=93.60%, sys=6.20%, ctx=14, majf=0, minf=221 00:41:32.921 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:32.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:32.921 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:32.921 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:32.921 00:41:32.921 Run status group 0 (all jobs): 
00:41:32.921 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10010-10010msec 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 00:41:32.921 real 0m11.022s 00:41:32.921 user 0m22.657s 00:41:32.921 sys 0m0.875s 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 ************************************ 00:41:32.921 END TEST fio_dif_1_default 00:41:32.921 ************************************ 00:41:32.921 17:08:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:32.921 17:08:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:32.921 17:08:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 ************************************ 00:41:32.921 START TEST fio_dif_1_multi_subsystems 00:41:32.921 ************************************ 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 bdev_null0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 [2024-12-06 17:08:19.897263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 bdev_null1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:32.921 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:32.922 { 00:41:32.922 "params": { 00:41:32.922 "name": "Nvme$subsystem", 00:41:32.922 "trtype": "$TEST_TRANSPORT", 00:41:32.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.922 "adrfam": "ipv4", 00:41:32.922 "trsvcid": "$NVMF_PORT", 00:41:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.922 "hdgst": ${hdgst:-false}, 00:41:32.922 "ddgst": ${ddgst:-false} 00:41:32.922 }, 00:41:32.922 "method": "bdev_nvme_attach_controller" 00:41:32.922 } 00:41:32.922 EOF 00:41:32.922 )") 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:32.922 { 00:41:32.922 "params": { 00:41:32.922 "name": "Nvme$subsystem", 00:41:32.922 "trtype": "$TEST_TRANSPORT", 00:41:32.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.922 "adrfam": "ipv4", 00:41:32.922 "trsvcid": "$NVMF_PORT", 00:41:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.922 "hdgst": ${hdgst:-false}, 00:41:32.922 "ddgst": ${ddgst:-false} 00:41:32.922 }, 00:41:32.922 "method": "bdev_nvme_attach_controller" 00:41:32.922 } 00:41:32.922 EOF 00:41:32.922 )") 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:32.922 "params": { 00:41:32.922 "name": "Nvme0", 00:41:32.922 "trtype": "tcp", 00:41:32.922 "traddr": "10.0.0.2", 00:41:32.922 "adrfam": "ipv4", 00:41:32.922 "trsvcid": "4420", 00:41:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:32.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:32.922 "hdgst": false, 00:41:32.922 "ddgst": false 00:41:32.922 }, 00:41:32.922 "method": "bdev_nvme_attach_controller" 00:41:32.922 },{ 00:41:32.922 "params": { 00:41:32.922 "name": "Nvme1", 00:41:32.922 "trtype": "tcp", 00:41:32.922 "traddr": "10.0.0.2", 00:41:32.922 "adrfam": "ipv4", 00:41:32.922 "trsvcid": "4420", 00:41:32.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:32.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:32.922 "hdgst": false, 00:41:32.922 "ddgst": false 00:41:32.922 }, 00:41:32.922 "method": "bdev_nvme_attach_controller" 00:41:32.922 }' 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:32.922 17:08:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:32.922 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:32.922 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:32.922 fio-3.35 00:41:32.922 Starting 2 threads 00:41:42.907 00:41:42.907 filename0: (groupid=0, jobs=1): err= 0: pid=2635273: Fri Dec 6 17:08:30 2024 00:41:42.907 read: IOPS=190, BW=760KiB/s (778kB/s)(7616KiB/10018msec) 00:41:42.907 slat (nsec): min=2825, max=33678, avg=5755.49, stdev=1095.39 00:41:42.907 clat (usec): min=436, max=45914, avg=21030.48, stdev=20148.74 00:41:42.907 lat (usec): min=443, max=45927, avg=21036.23, stdev=20148.67 00:41:42.907 clat percentiles (usec): 00:41:42.907 | 1.00th=[ 570], 5.00th=[ 734], 10.00th=[ 775], 20.00th=[ 922], 00:41:42.907 | 30.00th=[ 938], 40.00th=[ 963], 50.00th=[ 1123], 60.00th=[41157], 00:41:42.907 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:42.907 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:41:42.907 | 99.99th=[45876] 00:41:42.907 bw ( KiB/s): min= 672, max= 832, per=49.93%, avg=760.00, stdev=32.63, samples=20 00:41:42.907 iops : min= 168, max= 208, avg=190.00, stdev= 8.16, samples=20 00:41:42.907 lat (usec) : 500=0.42%, 750=6.93%, 1000=40.49% 00:41:42.907 lat (msec) : 2=2.15%, 50=50.00% 00:41:42.907 cpu : usr=95.57%, sys=4.24%, ctx=9, majf=0, minf=94 00:41:42.907 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:42.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:42.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:42.907 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:42.907 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:42.907 filename1: (groupid=0, jobs=1): err= 0: pid=2635274: Fri Dec 6 17:08:30 2024 00:41:42.907 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10006msec) 00:41:42.907 slat (nsec): min=4195, max=23970, avg=5847.42, stdev=985.89 00:41:42.907 clat (usec): min=482, max=42699, avg=20961.09, stdev=20231.99 00:41:42.907 lat (usec): min=487, max=42707, avg=20966.94, stdev=20232.01 00:41:42.907 clat percentiles (usec): 00:41:42.907 | 1.00th=[ 553], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 791], 00:41:42.907 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[ 4555], 60.00th=[41157], 00:41:42.907 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:42.907 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:42.907 | 99.99th=[42730] 00:41:42.907 bw ( KiB/s): min= 704, max= 768, per=50.00%, avg=761.60, stdev=19.70, samples=20 00:41:42.907 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:41:42.907 lat (usec) : 500=0.52%, 750=18.24%, 1000=30.71% 00:41:42.907 lat (msec) : 2=0.42%, 10=0.21%, 50=49.90% 00:41:42.907 cpu : usr=95.45%, sys=4.36%, ctx=8, majf=0, minf=147 00:41:42.907 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:42.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:42.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:42.907 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:42.907 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:42.907 00:41:42.907 Run status group 0 (all jobs): 00:41:42.907 READ: bw=1522KiB/s (1559kB/s), 760KiB/s-763KiB/s (778kB/s-781kB/s), io=14.9MiB (15.6MB), run=10006-10018msec 00:41:42.907 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:42.907 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:42.907 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:42.907 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:42.907 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 00:41:42.908 real 0m11.170s 00:41:42.908 user 0m33.783s 00:41:42.908 sys 0m1.193s 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 ************************************ 00:41:42.908 END TEST fio_dif_1_multi_subsystems 00:41:42.908 ************************************ 00:41:42.908 17:08:31 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:41:42.908 17:08:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:42.908 17:08:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 ************************************ 00:41:42.908 START TEST fio_dif_rand_params 00:41:42.908 ************************************ 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 bdev_null0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:42.908 [2024-12-06 17:08:31.115116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:42.908 
17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:42.908 { 00:41:42.908 "params": { 00:41:42.908 "name": "Nvme$subsystem", 00:41:42.908 "trtype": "$TEST_TRANSPORT", 00:41:42.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:42.908 "adrfam": "ipv4", 00:41:42.908 "trsvcid": "$NVMF_PORT", 00:41:42.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:42.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:42.908 "hdgst": ${hdgst:-false}, 00:41:42.908 "ddgst": ${ddgst:-false} 00:41:42.908 }, 00:41:42.908 "method": "bdev_nvme_attach_controller" 00:41:42.908 } 00:41:42.908 EOF 00:41:42.908 )") 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
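# [editor's sketch, not part of the original trace] gen_nvmf_target_json, traced just above, builds one
# bdev_nvme_attach_controller parameter block per subsystem id and comma-joins them before handing the
# result to fio. A minimal standalone equivalent is sketched below; the function name gen_attach_params
# is hypothetical, and the tcp/10.0.0.2/4420 values are hard-coded stand-ins for the $TEST_TRANSPORT,
# $NVMF_FIRST_TARGET_IP and $NVMF_PORT variables expanded in the trace (the jq validation step is omitted):
gen_attach_params() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # comma-join the per-subsystem objects, as the IFS=, / printf '%s\n' steps in the trace do
    local IFS=,
    printf '%s\n' "${config[*]}"
}
gen_attach_params 0   # one controller for this fio_dif_rand_params run; "0 1" reproduces the two-controller string from the multi-subsystems run above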
00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:42.908 "params": { 00:41:42.908 "name": "Nvme0", 00:41:42.908 "trtype": "tcp", 00:41:42.908 "traddr": "10.0.0.2", 00:41:42.908 "adrfam": "ipv4", 00:41:42.908 "trsvcid": "4420", 00:41:42.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:42.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:42.908 "hdgst": false, 00:41:42.908 "ddgst": false 00:41:42.908 }, 00:41:42.908 "method": "bdev_nvme_attach_controller" 00:41:42.908 }' 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:42.908 17:08:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:42.908 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:42.908 ... 
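# [editor's sketch, not part of the original trace] fio_bdev, as traced above, preloads the SPDK bdev
# engine plugin and feeds fio two pipes: /dev/fd/62 carries the JSON configuration assembled from the
# attach parameters, /dev/fd/61 the job file written by gen_fio_conf. A standalone approximation with
# regular files follows; /tmp/bdev.json and the exact job-file option set are assumptions, and only the
# values echoed in the fio banner (randread, 128KiB blocks, iodepth=3, 3 jobs, 5s runtime) are confirmed:
cat > /tmp/filename0.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1
filename=Nvme0n1
EOF
# Nvme0n1 is the namespace bdev produced by attaching controller Nvme0; the plugin path below is the
# one that appears in this job's trace
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/filename0.fio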
00:41:42.908 fio-3.35 00:41:42.908 Starting 3 threads 00:41:49.474 00:41:49.474 filename0: (groupid=0, jobs=1): err= 0: pid=2637814: Fri Dec 6 17:08:36 2024 00:41:49.474 read: IOPS=372, BW=46.6MiB/s (48.9MB/s)(235MiB/5046msec) 00:41:49.474 slat (usec): min=3, max=110, avg= 7.13, stdev= 2.85 00:41:49.474 clat (usec): min=4214, max=49299, avg=8012.73, stdev=3361.30 00:41:49.474 lat (usec): min=4220, max=49306, avg=8019.86, stdev=3361.26 00:41:49.474 clat percentiles (usec): 00:41:49.474 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 6521], 00:41:49.474 | 30.00th=[ 6980], 40.00th=[ 7308], 50.00th=[ 7635], 60.00th=[ 7898], 00:41:49.474 | 70.00th=[ 8356], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[10552], 00:41:49.474 | 99.00th=[11994], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:41:49.474 | 99.99th=[49546] 00:41:49.474 bw ( KiB/s): min=43264, max=51456, per=41.47%, avg=48102.40, stdev=2564.12, samples=10 00:41:49.474 iops : min= 338, max= 402, avg=375.80, stdev=20.03, samples=10 00:41:49.474 lat (msec) : 10=91.13%, 20=8.29%, 50=0.58% 00:41:49.474 cpu : usr=95.90%, sys=3.87%, ctx=10, majf=0, minf=156 00:41:49.474 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.474 issued rwts: total=1882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.474 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.474 filename0: (groupid=0, jobs=1): err= 0: pid=2637815: Fri Dec 6 17:08:36 2024 00:41:49.474 read: IOPS=185, BW=23.2MiB/s (24.4MB/s)(116MiB/5008msec) 00:41:49.474 slat (nsec): min=4380, max=25308, avg=7758.31, stdev=1478.58 00:41:49.474 clat (usec): min=5190, max=91836, avg=16122.53, stdev=17884.71 00:41:49.474 lat (usec): min=5196, max=91842, avg=16130.29, stdev=17884.67 00:41:49.474 clat percentiles (usec): 00:41:49.474 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 6980], 20.00th=[ 7635], 00:41:49.474 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:41:49.474 | 70.00th=[ 9896], 80.00th=[10814], 90.00th=[49546], 95.00th=[50594], 00:41:49.474 | 99.00th=[89654], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751], 00:41:49.474 | 99.99th=[91751] 00:41:49.474 bw ( KiB/s): min=17920, max=30464, per=20.48%, avg=23756.80, stdev=4327.33, samples=10 00:41:49.474 iops : min= 140, max= 238, avg=185.60, stdev=33.81, samples=10 00:41:49.474 lat (msec) : 10=71.97%, 20=11.60%, 50=8.38%, 100=8.06% 00:41:49.474 cpu : usr=96.56%, sys=3.16%, ctx=11, majf=0, minf=72 00:41:49.474 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.474 issued rwts: total=931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.474 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.474 filename0: (groupid=0, jobs=1): err= 0: pid=2637816: Fri Dec 6 17:08:36 2024 00:41:49.474 read: IOPS=348, BW=43.6MiB/s (45.7MB/s)(220MiB/5045msec) 00:41:49.474 slat (nsec): min=4386, max=36803, avg=7124.88, stdev=1378.35 00:41:49.474 clat (usec): min=4311, max=89292, avg=8565.75, stdev=5086.09 00:41:49.474 lat (usec): min=4317, max=89298, avg=8572.88, stdev=5086.04 00:41:49.474 clat percentiles (usec): 00:41:49.474 | 1.00th=[ 5211], 5.00th=[ 5800], 10.00th=[ 6128], 20.00th=[ 6587], 00:41:49.474 | 
30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7832], 60.00th=[ 8291], 00:41:49.474 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11076], 00:41:49.474 | 99.00th=[47973], 99.50th=[48497], 99.90th=[52691], 99.95th=[89654], 00:41:49.474 | 99.99th=[89654] 00:41:49.474 bw ( KiB/s): min=36608, max=48896, per=38.80%, avg=45004.80, stdev=3857.60, samples=10 00:41:49.474 iops : min= 286, max= 382, avg=351.60, stdev=30.14, samples=10 00:41:49.474 lat (msec) : 10=85.28%, 20=13.47%, 50=0.97%, 100=0.28% 00:41:49.474 cpu : usr=95.76%, sys=3.98%, ctx=11, majf=0, minf=99 00:41:49.474 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:49.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:49.474 issued rwts: total=1760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:49.475 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:49.475 00:41:49.475 Run status group 0 (all jobs): 00:41:49.475 READ: bw=113MiB/s (119MB/s), 23.2MiB/s-46.6MiB/s (24.4MB/s-48.9MB/s), io=572MiB (599MB), run=5008-5046msec 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null0 64 512 --md-size 16 --dif-type 2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 bdev_null0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 [2024-12-06 17:08:37.151095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 bdev_null1 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 bdev_null2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:49.475 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:49.476 { 00:41:49.476 "params": { 00:41:49.476 "name": "Nvme$subsystem", 00:41:49.476 "trtype": "$TEST_TRANSPORT", 00:41:49.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.476 "adrfam": "ipv4", 00:41:49.476 "trsvcid": "$NVMF_PORT", 00:41:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.476 "hdgst": ${hdgst:-false}, 00:41:49.476 "ddgst": ${ddgst:-false} 00:41:49.476 }, 00:41:49.476 "method": "bdev_nvme_attach_controller" 00:41:49.476 } 00:41:49.476 EOF 00:41:49.476 )") 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:49.476 { 00:41:49.476 "params": { 00:41:49.476 "name": "Nvme$subsystem", 00:41:49.476 "trtype": "$TEST_TRANSPORT", 00:41:49.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.476 "adrfam": "ipv4", 00:41:49.476 "trsvcid": "$NVMF_PORT", 00:41:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.476 "hdgst": ${hdgst:-false}, 00:41:49.476 "ddgst": ${ddgst:-false} 00:41:49.476 }, 00:41:49.476 "method": "bdev_nvme_attach_controller" 00:41:49.476 } 00:41:49.476 EOF 00:41:49.476 )") 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # 
cat 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:49.476 { 00:41:49.476 "params": { 00:41:49.476 "name": "Nvme$subsystem", 00:41:49.476 "trtype": "$TEST_TRANSPORT", 00:41:49.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:49.476 "adrfam": "ipv4", 00:41:49.476 "trsvcid": "$NVMF_PORT", 00:41:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:49.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:49.476 "hdgst": ${hdgst:-false}, 00:41:49.476 "ddgst": ${ddgst:-false} 00:41:49.476 }, 00:41:49.476 "method": "bdev_nvme_attach_controller" 00:41:49.476 } 00:41:49.476 EOF 00:41:49.476 )") 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:49.476 "params": { 00:41:49.476 "name": "Nvme0", 00:41:49.476 "trtype": "tcp", 00:41:49.476 "traddr": "10.0.0.2", 00:41:49.476 "adrfam": "ipv4", 00:41:49.476 "trsvcid": "4420", 00:41:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:49.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:49.476 "hdgst": false, 00:41:49.476 "ddgst": false 00:41:49.476 }, 00:41:49.476 "method": "bdev_nvme_attach_controller" 00:41:49.476 },{ 00:41:49.476 "params": { 00:41:49.476 "name": "Nvme1", 00:41:49.476 "trtype": "tcp", 00:41:49.476 "traddr": "10.0.0.2", 00:41:49.476 "adrfam": "ipv4", 00:41:49.476 "trsvcid": "4420", 00:41:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:49.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:49.476 "hdgst": false, 00:41:49.476 "ddgst": false 00:41:49.476 }, 00:41:49.476 "method": "bdev_nvme_attach_controller" 00:41:49.476 },{ 00:41:49.476 "params": { 00:41:49.476 "name": "Nvme2", 00:41:49.476 "trtype": "tcp", 00:41:49.476 "traddr": "10.0.0.2", 00:41:49.476 "adrfam": "ipv4", 00:41:49.476 "trsvcid": "4420", 00:41:49.476 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:49.476 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:49.476 "hdgst": false, 00:41:49.476 "ddgst": false 00:41:49.476 }, 00:41:49.476 "method": "bdev_nvme_attach_controller" 00:41:49.476 }' 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:49.476 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:49.477 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:49.477 17:08:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:49.477 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:49.477 17:08:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:49.477 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:49.477 ... 00:41:49.477 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:49.477 ... 00:41:49.477 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:49.477 ... 00:41:49.477 fio-3.35 00:41:49.477 Starting 24 threads 00:42:01.696 00:42:01.696 filename0: (groupid=0, jobs=1): err= 0: pid=2639329: Fri Dec 6 17:08:48 2024 00:42:01.696 read: IOPS=669, BW=2679KiB/s (2743kB/s)(26.2MiB/10011msec) 00:42:01.696 slat (nsec): min=4004, max=95367, avg=20288.54, stdev=16067.65 00:42:01.696 clat (usec): min=19686, max=29659, avg=23717.10, stdev=838.13 00:42:01.696 lat (usec): min=19693, max=29671, avg=23737.39, stdev=836.26 00:42:01.696 clat percentiles (usec): 00:42:01.696 | 1.00th=[21890], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 00:42:01.696 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:42:01.696 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:42:01.696 | 99.00th=[26608], 99.50th=[28181], 99.90th=[29754], 99.95th=[29754], 00:42:01.696 | 99.99th=[29754] 00:42:01.696 bw ( KiB/s): min= 2560, max= 2816, per=4.07%, avg=2674.79, stdev=55.19, samples=19 00:42:01.696 iops : min= 640, max= 704, avg=668.68, stdev=13.82, samples=19 00:42:01.696 lat (msec) : 20=0.06%, 50=99.94% 00:42:01.696 cpu : usr=99.05%, sys=0.64%, ctx=14, majf=0, minf=20 00:42:01.696 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:01.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.696 filename0: (groupid=0, jobs=1): err= 0: pid=2639330: Fri Dec 6 17:08:48 2024 00:42:01.696 read: IOPS=673, BW=2694KiB/s (2758kB/s)(26.3MiB/10006msec) 00:42:01.696 slat (nsec): min=4139, max=94079, avg=23451.03, stdev=15345.67 00:42:01.696 clat (usec): min=12593, max=42579, avg=23554.26, stdev=1474.10 00:42:01.696 lat (usec): min=12599, max=42591, avg=23577.71, stdev=1474.97 00:42:01.696 clat percentiles (usec): 00:42:01.696 | 1.00th=[16909], 5.00th=[22414], 10.00th=[22676], 20.00th=[22938], 00:42:01.696 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:42:01.696 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:42:01.696 | 99.00th=[27395], 99.50th=[28443], 99.90th=[32375], 99.95th=[42730], 00:42:01.696 | 99.99th=[42730] 00:42:01.696 bw ( KiB/s): min= 2560, max= 2864, per=4.10%, avg=2689.11, stdev=97.18, samples=19 00:42:01.696 iops : min= 640, max= 716, avg=672.26, stdev=24.31, samples=19 00:42:01.696 lat (msec) : 20=2.27%, 50=97.73% 00:42:01.696 cpu : usr=98.96%, sys=0.72%, ctx=14, majf=0, minf=14 00:42:01.696 IO depths : 1=5.5%, 2=11.1%, 4=23.4%, 8=52.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:42:01.696 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 issued rwts: total=6738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.696 filename0: (groupid=0, jobs=1): err= 0: pid=2639331: Fri Dec 6 17:08:48 2024 00:42:01.696 read: IOPS=678, BW=2714KiB/s (2779kB/s)(26.5MiB/10007msec) 00:42:01.696 slat (nsec): min=4473, max=83426, avg=19165.93, stdev=13827.19 00:42:01.696 clat (usec): min=7848, max=42423, avg=23415.96, stdev=2692.25 00:42:01.696 lat (usec): min=7855, max=42436, avg=23435.12, stdev=2693.80 00:42:01.696 clat percentiles (usec): 00:42:01.696 | 1.00th=[13566], 5.00th=[17957], 10.00th=[22414], 20.00th=[22938], 00:42:01.696 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:42:01.696 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:42:01.696 | 99.00th=[32375], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:42:01.696 | 99.99th=[42206] 00:42:01.696 bw ( KiB/s): min= 2560, max= 3104, per=4.13%, avg=2709.85, stdev=130.94, samples=20 00:42:01.696 iops : min= 640, max= 776, avg=677.45, stdev=32.74, samples=20 00:42:01.696 lat (msec) : 10=0.35%, 20=5.61%, 50=94.04% 00:42:01.696 cpu : usr=98.81%, sys=0.89%, ctx=14, majf=0, minf=17 00:42:01.696 IO depths : 1=5.0%, 2=10.1%, 4=21.2%, 8=55.8%, 16=7.9%, 32=0.0%, >=64=0.0% 00:42:01.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 issued rwts: total=6790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.696 filename0: (groupid=0, jobs=1): err= 0: pid=2639332: Fri Dec 6 17:08:48 2024 00:42:01.696 read: IOPS=681, BW=2725KiB/s (2790kB/s)(26.6MiB/10013msec) 00:42:01.696 slat (nsec): min=5522, max=71081, avg=10067.08, stdev=7628.77 00:42:01.696 clat (usec): min=5216, max=42815, avg=23404.30, stdev=2580.09 00:42:01.696 lat (usec): min=5222, max=42827, avg=23414.37, stdev=2579.98 00:42:01.696 clat percentiles (usec): 00:42:01.696 | 1.00th=[10683], 5.00th=[21365], 10.00th=[22676], 20.00th=[23200], 00:42:01.696 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:42:01.696 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:42:01.696 | 99.00th=[27395], 99.50th=[31589], 99.90th=[42730], 99.95th=[42730], 00:42:01.696 | 99.99th=[42730] 00:42:01.696 bw ( KiB/s): min= 2560, max= 3016, per=4.14%, avg=2722.00, stdev=93.87, samples=20 00:42:01.696 iops : min= 640, max= 754, avg=680.50, stdev=23.47, samples=20 00:42:01.696 lat (msec) : 10=0.76%, 20=3.34%, 50=95.90% 00:42:01.696 cpu : usr=98.85%, sys=0.85%, ctx=14, majf=0, minf=39 00:42:01.696 IO depths : 1=5.6%, 2=11.5%, 4=23.7%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:01.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.696 issued rwts: total=6821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.696 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.696 filename0: (groupid=0, jobs=1): err= 0: pid=2639333: Fri Dec 6 17:08:48 2024 00:42:01.696 read: IOPS=679, BW=2716KiB/s (2781kB/s)(26.6MiB/10014msec) 00:42:01.696 slat (nsec): min=2947, max=81265, avg=14119.81, stdev=12032.51 00:42:01.696 clat (usec): min=13606, max=33097, 
avg=23444.13, stdev=1795.24 00:42:01.696 lat (usec): min=13612, max=33140, avg=23458.25, stdev=1796.56 00:42:01.696 clat percentiles (usec): 00:42:01.696 | 1.00th=[14615], 5.00th=[20841], 10.00th=[22676], 20.00th=[23200], 00:42:01.696 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:42:01.696 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:42:01.697 | 99.00th=[26870], 99.50th=[27919], 99.90th=[32900], 99.95th=[32900], 00:42:01.697 | 99.99th=[33162] 00:42:01.697 bw ( KiB/s): min= 2560, max= 3200, per=4.13%, avg=2712.42, stdev=130.46, samples=19 00:42:01.697 iops : min= 640, max= 800, avg=678.11, stdev=32.62, samples=19 00:42:01.697 lat (msec) : 20=4.57%, 50=95.43% 00:42:01.697 cpu : usr=98.82%, sys=0.88%, ctx=17, majf=0, minf=15 00:42:01.697 IO depths : 1=5.7%, 2=11.4%, 4=23.3%, 8=52.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=6800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename0: (groupid=0, jobs=1): err= 0: pid=2639334: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=680, BW=2721KiB/s (2786kB/s)(26.6MiB/10003msec) 00:42:01.697 slat (usec): min=5, max=212, avg=24.00, stdev=21.97 00:42:01.697 clat (usec): min=3299, max=53040, avg=23313.89, stdev=2764.61 00:42:01.697 lat (usec): min=3305, max=53059, avg=23337.90, stdev=2766.52 00:42:01.697 clat percentiles (usec): 00:42:01.697 | 1.00th=[14091], 5.00th=[17957], 10.00th=[22414], 20.00th=[22938], 00:42:01.697 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:42:01.697 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25297], 00:42:01.697 | 99.00th=[31851], 99.50th=[33817], 99.90th=[39584], 99.95th=[52691], 00:42:01.697 | 99.99th=[53216] 00:42:01.697 bw ( KiB/s): min= 2525, max= 2976, per=4.13%, avg=2712.68, stdev=85.23, samples=19 00:42:01.697 iops : min= 631, max= 744, avg=678.16, stdev=21.34, samples=19 00:42:01.697 lat (msec) : 4=0.15%, 10=0.24%, 20=6.53%, 50=93.02%, 100=0.07% 00:42:01.697 cpu : usr=98.69%, sys=0.78%, ctx=94, majf=0, minf=21 00:42:01.697 IO depths : 1=3.6%, 2=7.4%, 4=16.5%, 8=62.0%, 16=10.5%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=92.0%, 8=3.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=6804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename0: (groupid=0, jobs=1): err= 0: pid=2639335: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=733, BW=2933KiB/s (3003kB/s)(28.7MiB/10014msec) 00:42:01.697 slat (usec): min=4, max=210, avg=16.38, stdev=18.26 00:42:01.697 clat (usec): min=1884, max=44962, avg=21687.64, stdev=5146.17 00:42:01.697 lat (usec): min=1898, max=44971, avg=21704.03, stdev=5149.81 00:42:01.697 clat percentiles (usec): 00:42:01.697 | 1.00th=[ 7832], 5.00th=[13829], 10.00th=[15008], 20.00th=[17433], 00:42:01.697 | 30.00th=[19530], 40.00th=[22414], 50.00th=[23200], 60.00th=[23462], 00:42:01.697 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[29230], 00:42:01.697 | 99.00th=[38011], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:42:01.697 | 99.99th=[44827] 00:42:01.697 bw ( KiB/s): min= 2560, max= 3376, per=4.47%, avg=2934.40, stdev=214.62, 
samples=20 00:42:01.697 iops : min= 640, max= 844, avg=733.60, stdev=53.66, samples=20 00:42:01.697 lat (msec) : 2=0.08%, 4=0.27%, 10=1.58%, 20=29.94%, 50=68.13% 00:42:01.697 cpu : usr=98.34%, sys=1.14%, ctx=89, majf=0, minf=26 00:42:01.697 IO depths : 1=2.4%, 2=4.8%, 4=13.2%, 8=69.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=90.9%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=7342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename0: (groupid=0, jobs=1): err= 0: pid=2639336: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=669, BW=2677KiB/s (2741kB/s)(26.1MiB/10003msec) 00:42:01.697 slat (usec): min=5, max=139, avg=23.90, stdev=19.47 00:42:01.697 clat (usec): min=2969, max=38253, avg=23667.92, stdev=1767.81 00:42:01.697 lat (usec): min=2975, max=38275, avg=23691.82, stdev=1768.16 00:42:01.697 clat percentiles (usec): 00:42:01.697 | 1.00th=[21627], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 00:42:01.697 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:42:01.697 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:42:01.697 | 99.00th=[28181], 99.50th=[31327], 99.90th=[38011], 99.95th=[38011], 00:42:01.697 | 99.99th=[38011] 00:42:01.697 bw ( KiB/s): min= 2560, max= 2704, per=4.06%, avg=2663.84, stdev=48.41, samples=19 00:42:01.697 iops : min= 640, max= 676, avg=665.95, stdev=12.13, samples=19 00:42:01.697 lat (msec) : 4=0.03%, 10=0.45%, 20=0.30%, 50=99.22% 00:42:01.697 cpu : usr=98.66%, sys=0.85%, ctx=77, majf=0, minf=17 00:42:01.697 IO depths : 1=5.7%, 2=11.6%, 4=23.8%, 8=51.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=93.9%, 8=0.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename1: (groupid=0, jobs=1): err= 0: pid=2639337: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=685, BW=2741KiB/s (2807kB/s)(26.8MiB/10011msec) 00:42:01.697 slat (nsec): min=2984, max=98171, avg=18396.52, stdev=15033.75 00:42:01.697 clat (usec): min=9896, max=39149, avg=23206.36, stdev=2976.10 00:42:01.697 lat (usec): min=9902, max=39172, avg=23224.76, stdev=2978.64 00:42:01.697 clat percentiles (usec): 00:42:01.697 | 1.00th=[13435], 5.00th=[16712], 10.00th=[19530], 20.00th=[22938], 00:42:01.697 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:42:01.697 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25822], 00:42:01.697 | 99.00th=[33424], 99.50th=[34866], 99.90th=[38536], 99.95th=[39060], 00:42:01.697 | 99.99th=[39060] 00:42:01.697 bw ( KiB/s): min= 2560, max= 3056, per=4.17%, avg=2740.47, stdev=127.95, samples=19 00:42:01.697 iops : min= 640, max= 764, avg=685.11, stdev=31.99, samples=19 00:42:01.697 lat (msec) : 10=0.09%, 20=10.71%, 50=89.20% 00:42:01.697 cpu : usr=98.89%, sys=0.79%, ctx=14, majf=0, minf=21 00:42:01.697 IO depths : 1=4.6%, 2=9.2%, 4=20.2%, 8=57.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=92.8%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=6860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename1: (groupid=0, jobs=1): err= 0: pid=2639338: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=662, BW=2649KiB/s (2712kB/s)(25.9MiB/10004msec) 00:42:01.697 slat (usec): min=4, max=120, avg=28.15, stdev=16.74 00:42:01.697 clat (usec): min=14912, max=54334, avg=23905.67, stdev=2007.44 00:42:01.697 lat (usec): min=14921, max=54347, avg=23933.82, stdev=2005.86 00:42:01.697 clat percentiles (usec): 00:42:01.697 | 1.00th=[21890], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 00:42:01.697 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23725], 00:42:01.697 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:42:01.697 | 99.00th=[32637], 99.50th=[33817], 99.90th=[45876], 99.95th=[45876], 00:42:01.697 | 99.99th=[54264] 00:42:01.697 bw ( KiB/s): min= 2432, max= 2816, per=4.02%, avg=2640.84, stdev=106.33, samples=19 00:42:01.697 iops : min= 608, max= 704, avg=660.21, stdev=26.58, samples=19 00:42:01.697 lat (msec) : 20=0.45%, 50=99.53%, 100=0.02% 00:42:01.697 cpu : usr=98.84%, sys=0.77%, ctx=108, majf=0, minf=24 00:42:01.697 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename1: (groupid=0, jobs=1): err= 0: pid=2639339: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=672, BW=2692KiB/s (2756kB/s)(26.4MiB/10043msec) 00:42:01.697 slat (usec): min=4, max=129, avg=19.53, stdev=18.11 00:42:01.697 clat (usec): min=7896, max=48681, avg=23640.11, stdev=3675.38 00:42:01.697 lat (usec): min=7903, max=48688, avg=23659.64, stdev=3675.90 00:42:01.697 clat percentiles (usec): 00:42:01.697 | 1.00th=[14222], 5.00th=[17171], 10.00th=[19530], 20.00th=[22676], 00:42:01.697 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:42:01.697 | 70.00th=[24249], 80.00th=[24773], 90.00th=[26870], 95.00th=[29754], 00:42:01.697 | 99.00th=[38536], 99.50th=[40633], 99.90th=[44303], 99.95th=[48497], 00:42:01.697 | 99.99th=[48497] 00:42:01.697 bw ( KiB/s): min= 2560, max= 2864, per=4.09%, avg=2688.84, stdev=85.95, samples=19 00:42:01.697 iops : min= 640, max= 716, avg=672.21, stdev=21.49, samples=19 00:42:01.697 lat (msec) : 10=0.28%, 20=11.32%, 50=88.40% 00:42:01.697 cpu : usr=98.95%, sys=0.70%, ctx=71, majf=0, minf=28 00:42:01.697 IO depths : 1=1.0%, 2=2.3%, 4=7.5%, 8=75.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=90.0%, 8=6.9%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename1: (groupid=0, jobs=1): err= 0: pid=2639340: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=740, BW=2962KiB/s (3033kB/s)(29.0MiB/10012msec) 00:42:01.697 slat (nsec): min=4100, max=88165, avg=9258.19, stdev=7595.44 00:42:01.697 clat (usec): min=2343, max=43020, avg=21539.19, stdev=4401.58 00:42:01.697 lat (usec): min=2352, max=43026, avg=21548.45, stdev=4402.58 00:42:01.697 clat percentiles (usec): 00:42:01.697 | 1.00th=[ 5211], 5.00th=[13173], 10.00th=[14222], 20.00th=[18744], 00:42:01.697 | 30.00th=[22676], 40.00th=[23200], 
50.00th=[23462], 60.00th=[23725], 00:42:01.697 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:42:01.697 | 99.00th=[25822], 99.50th=[26084], 99.90th=[32113], 99.95th=[43254], 00:42:01.697 | 99.99th=[43254] 00:42:01.697 bw ( KiB/s): min= 2560, max= 3872, per=4.51%, avg=2958.80, stdev=476.01, samples=20 00:42:01.697 iops : min= 640, max= 968, avg=739.70, stdev=119.00, samples=20 00:42:01.697 lat (msec) : 4=0.94%, 10=2.04%, 20=20.45%, 50=76.57% 00:42:01.697 cpu : usr=98.90%, sys=0.76%, ctx=46, majf=0, minf=21 00:42:01.697 IO depths : 1=4.4%, 2=8.9%, 4=19.8%, 8=58.6%, 16=8.2%, 32=0.0%, >=64=0.0% 00:42:01.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 complete : 0=0.0%, 4=92.6%, 8=1.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.697 issued rwts: total=7413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.697 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.697 filename1: (groupid=0, jobs=1): err= 0: pid=2639341: Fri Dec 6 17:08:48 2024 00:42:01.697 read: IOPS=679, BW=2718KiB/s (2783kB/s)(26.5MiB/10001msec) 00:42:01.698 slat (usec): min=4, max=124, avg=25.61, stdev=20.03 00:42:01.698 clat (usec): min=9014, max=45778, avg=23300.69, stdev=2736.68 00:42:01.698 lat (usec): min=9022, max=45791, avg=23326.30, stdev=2738.94 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[14484], 5.00th=[17695], 10.00th=[21890], 20.00th=[22938], 00:42:01.698 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:42:01.698 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:42:01.698 | 99.00th=[32375], 99.50th=[35390], 99.90th=[42206], 99.95th=[45876], 00:42:01.698 | 99.99th=[45876] 00:42:01.698 bw ( KiB/s): min= 2560, max= 3120, per=4.13%, avg=2713.53, stdev=142.83, samples=19 00:42:01.698 iops : min= 640, max= 780, avg=678.37, stdev=35.71, samples=19 00:42:01.698 lat (msec) : 10=0.09%, 20=7.71%, 50=92.20% 00:42:01.698 cpu : usr=98.69%, sys=0.91%, ctx=40, majf=0, minf=22 00:42:01.698 IO depths : 1=4.7%, 2=9.5%, 4=20.5%, 8=57.1%, 16=8.2%, 32=0.0%, >=64=0.0% 00:42:01.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 complete : 0=0.0%, 4=92.9%, 8=1.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 issued rwts: total=6796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.698 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.698 filename1: (groupid=0, jobs=1): err= 0: pid=2639342: Fri Dec 6 17:08:48 2024 00:42:01.698 read: IOPS=690, BW=2762KiB/s (2828kB/s)(27.0MiB/10017msec) 00:42:01.698 slat (usec): min=5, max=143, avg=18.07, stdev=16.42 00:42:01.698 clat (usec): min=7982, max=40905, avg=23015.75, stdev=2784.20 00:42:01.698 lat (usec): min=7988, max=40913, avg=23033.82, stdev=2785.76 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[12911], 5.00th=[16909], 10.00th=[20317], 20.00th=[22938], 00:42:01.698 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:42:01.698 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:42:01.698 | 99.00th=[29230], 99.50th=[33817], 99.90th=[39060], 99.95th=[40633], 00:42:01.698 | 99.99th=[41157] 00:42:01.698 bw ( KiB/s): min= 2560, max= 3080, per=4.20%, avg=2760.40, stdev=135.30, samples=20 00:42:01.698 iops : min= 640, max= 770, avg=690.10, stdev=33.83, samples=20 00:42:01.698 lat (msec) : 10=0.06%, 20=9.54%, 50=90.40% 00:42:01.698 cpu : usr=98.92%, sys=0.76%, ctx=16, majf=0, minf=18 00:42:01.698 IO depths : 1=5.0%, 2=10.2%, 4=21.5%, 8=55.7%, 16=7.6%, 
32=0.0%, >=64=0.0% 00:42:01.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 issued rwts: total=6917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.698 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.698 filename1: (groupid=0, jobs=1): err= 0: pid=2639343: Fri Dec 6 17:08:48 2024 00:42:01.698 read: IOPS=694, BW=2778KiB/s (2844kB/s)(27.1MiB/10006msec) 00:42:01.698 slat (usec): min=4, max=114, avg=21.80, stdev=17.65 00:42:01.698 clat (usec): min=8296, max=39437, avg=22867.52, stdev=3606.41 00:42:01.698 lat (usec): min=8304, max=39443, avg=22889.32, stdev=3610.02 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[13304], 5.00th=[15139], 10.00th=[17695], 20.00th=[22152], 00:42:01.698 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:42:01.698 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[27657], 00:42:01.698 | 99.00th=[34341], 99.50th=[36963], 99.90th=[39060], 99.95th=[39584], 00:42:01.698 | 99.99th=[39584] 00:42:01.698 bw ( KiB/s): min= 2560, max= 3168, per=4.23%, avg=2777.26, stdev=161.53, samples=19 00:42:01.698 iops : min= 640, max= 792, avg=694.32, stdev=40.38, samples=19 00:42:01.698 lat (msec) : 10=0.17%, 20=15.53%, 50=84.30% 00:42:01.698 cpu : usr=98.89%, sys=0.77%, ctx=42, majf=0, minf=19 00:42:01.698 IO depths : 1=3.2%, 2=6.9%, 4=16.9%, 8=63.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:42:01.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 complete : 0=0.0%, 4=92.1%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 issued rwts: total=6948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.698 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.698 filename1: (groupid=0, jobs=1): err= 0: pid=2639344: Fri Dec 6 17:08:48 2024 00:42:01.698 read: IOPS=687, BW=2751KiB/s (2817kB/s)(26.9MiB/10017msec) 00:42:01.698 slat (nsec): min=5515, max=89039, avg=18867.42, stdev=14665.00 00:42:01.698 clat (usec): min=10578, max=40115, avg=23120.51, stdev=2574.86 00:42:01.698 lat (usec): min=10593, max=40123, avg=23139.37, stdev=2576.56 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[14484], 5.00th=[16581], 10.00th=[20317], 20.00th=[22938], 00:42:01.698 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23462], 60.00th=[23725], 00:42:01.698 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:42:01.698 | 99.00th=[29230], 99.50th=[32375], 99.90th=[39060], 99.95th=[40109], 00:42:01.698 | 99.99th=[40109] 00:42:01.698 bw ( KiB/s): min= 2560, max= 3232, per=4.19%, avg=2748.80, stdev=149.61, samples=20 00:42:01.698 iops : min= 640, max= 808, avg=687.20, stdev=37.40, samples=20 00:42:01.698 lat (msec) : 20=9.38%, 50=90.62% 00:42:01.698 cpu : usr=98.85%, sys=0.83%, ctx=14, majf=0, minf=23 00:42:01.698 IO depths : 1=5.1%, 2=10.2%, 4=21.5%, 8=55.7%, 16=7.5%, 32=0.0%, >=64=0.0% 00:42:01.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 issued rwts: total=6888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.698 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.698 filename2: (groupid=0, jobs=1): err= 0: pid=2639345: Fri Dec 6 17:08:48 2024 00:42:01.698 read: IOPS=685, BW=2740KiB/s (2806kB/s)(26.8MiB/10013msec) 00:42:01.698 slat (usec): min=4, max=110, avg=18.73, stdev=17.59 00:42:01.698 clat (usec): min=8593, 
max=40905, avg=23204.19, stdev=3787.32 00:42:01.698 lat (usec): min=8617, max=40928, avg=23222.91, stdev=3789.75 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[12780], 5.00th=[15533], 10.00th=[18482], 20.00th=[22414], 00:42:01.698 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:42:01.698 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25560], 95.00th=[29754], 00:42:01.698 | 99.00th=[36439], 99.50th=[38011], 99.90th=[39584], 99.95th=[40633], 00:42:01.698 | 99.99th=[41157] 00:42:01.698 bw ( KiB/s): min= 2560, max= 3136, per=4.17%, avg=2738.85, stdev=121.40, samples=20 00:42:01.698 iops : min= 640, max= 784, avg=684.70, stdev=30.35, samples=20 00:42:01.698 lat (msec) : 10=0.10%, 20=13.69%, 50=86.21% 00:42:01.698 cpu : usr=98.86%, sys=0.82%, ctx=14, majf=0, minf=20 00:42:01.698 IO depths : 1=2.6%, 2=5.8%, 4=14.6%, 8=66.1%, 16=10.9%, 32=0.0%, >=64=0.0% 00:42:01.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 complete : 0=0.0%, 4=91.5%, 8=3.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 issued rwts: total=6860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.698 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.698 filename2: (groupid=0, jobs=1): err= 0: pid=2639346: Fri Dec 6 17:08:48 2024 00:42:01.698 read: IOPS=678, BW=2716KiB/s (2781kB/s)(26.5MiB/10004msec) 00:42:01.698 slat (nsec): min=4220, max=91106, avg=13242.70, stdev=12619.42 00:42:01.698 clat (usec): min=6027, max=47676, avg=23501.16, stdev=3965.80 00:42:01.698 lat (usec): min=6033, max=47688, avg=23514.41, stdev=3966.33 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[12256], 5.00th=[16909], 10.00th=[18482], 20.00th=[21627], 00:42:01.698 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:42:01.698 | 70.00th=[24249], 80.00th=[24773], 90.00th=[27395], 95.00th=[30278], 00:42:01.698 | 99.00th=[36963], 99.50th=[40633], 99.90th=[47449], 99.95th=[47449], 00:42:01.698 | 99.99th=[47449] 00:42:01.698 bw ( KiB/s): min= 2448, max= 2848, per=4.12%, avg=2703.16, stdev=104.03, samples=19 00:42:01.698 iops : min= 612, max= 712, avg=675.79, stdev=26.01, samples=19 00:42:01.698 lat (msec) : 10=0.32%, 20=13.78%, 50=85.90% 00:42:01.698 cpu : usr=99.06%, sys=0.62%, ctx=15, majf=0, minf=23 00:42:01.698 IO depths : 1=0.3%, 2=0.7%, 4=3.9%, 8=79.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:42:01.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 complete : 0=0.0%, 4=89.2%, 8=8.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 issued rwts: total=6792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.698 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.698 filename2: (groupid=0, jobs=1): err= 0: pid=2639347: Fri Dec 6 17:08:48 2024 00:42:01.698 read: IOPS=692, BW=2770KiB/s (2836kB/s)(27.1MiB/10017msec) 00:42:01.698 slat (usec): min=5, max=100, avg=22.49, stdev=16.81 00:42:01.698 clat (usec): min=9608, max=42477, avg=22925.76, stdev=3331.98 00:42:01.698 lat (usec): min=9616, max=42540, avg=22948.25, stdev=3334.85 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[12780], 5.00th=[15664], 10.00th=[18220], 20.00th=[22676], 00:42:01.698 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:42:01.698 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25297], 00:42:01.698 | 99.00th=[34866], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206], 00:42:01.698 | 99.99th=[42730] 00:42:01.698 bw ( KiB/s): min= 2688, max= 2992, per=4.22%, avg=2768.00, 
stdev=84.66, samples=20 00:42:01.698 iops : min= 672, max= 748, avg=692.00, stdev=21.17, samples=20 00:42:01.698 lat (msec) : 10=0.06%, 20=12.85%, 50=87.10% 00:42:01.698 cpu : usr=98.83%, sys=0.85%, ctx=17, majf=0, minf=23 00:42:01.698 IO depths : 1=4.9%, 2=10.0%, 4=21.3%, 8=56.1%, 16=7.7%, 32=0.0%, >=64=0.0% 00:42:01.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 complete : 0=0.0%, 4=93.1%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.698 issued rwts: total=6936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.698 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.698 filename2: (groupid=0, jobs=1): err= 0: pid=2639348: Fri Dec 6 17:08:48 2024 00:42:01.698 read: IOPS=679, BW=2720KiB/s (2785kB/s)(26.6MiB/10003msec) 00:42:01.698 slat (nsec): min=4312, max=91920, avg=14978.02, stdev=13386.11 00:42:01.698 clat (usec): min=2983, max=47014, avg=23439.18, stdev=3383.75 00:42:01.698 lat (usec): min=2989, max=47026, avg=23454.16, stdev=3384.80 00:42:01.698 clat percentiles (usec): 00:42:01.698 | 1.00th=[12125], 5.00th=[17171], 10.00th=[21365], 20.00th=[22938], 00:42:01.698 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:42:01.698 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25035], 95.00th=[26346], 00:42:01.698 | 99.00th=[36963], 99.50th=[38536], 99.90th=[41681], 99.95th=[46924], 00:42:01.698 | 99.99th=[46924] 00:42:01.698 bw ( KiB/s): min= 2496, max= 2912, per=4.12%, avg=2708.21, stdev=92.39, samples=19 00:42:01.698 iops : min= 624, max= 728, avg=677.05, stdev=23.10, samples=19 00:42:01.698 lat (msec) : 4=0.15%, 10=0.29%, 20=8.07%, 50=91.49% 00:42:01.698 cpu : usr=98.92%, sys=0.78%, ctx=20, majf=0, minf=25 00:42:01.699 IO depths : 1=0.9%, 2=2.6%, 4=7.7%, 8=73.7%, 16=15.0%, 32=0.0%, >=64=0.0% 00:42:01.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 complete : 0=0.0%, 4=90.5%, 8=7.2%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 issued rwts: total=6802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.699 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.699 filename2: (groupid=0, jobs=1): err= 0: pid=2639349: Fri Dec 6 17:08:48 2024 00:42:01.699 read: IOPS=702, BW=2811KiB/s (2878kB/s)(27.5MiB/10015msec) 00:42:01.699 slat (usec): min=4, max=102, avg=17.69, stdev=15.65 00:42:01.699 clat (usec): min=8336, max=43797, avg=22638.07, stdev=3954.89 00:42:01.699 lat (usec): min=8381, max=43808, avg=22655.77, stdev=3957.96 00:42:01.699 clat percentiles (usec): 00:42:01.699 | 1.00th=[13173], 5.00th=[14877], 10.00th=[16712], 20.00th=[20055], 00:42:01.699 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:42:01.699 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25035], 95.00th=[28181], 00:42:01.699 | 99.00th=[36963], 99.50th=[38011], 99.90th=[39584], 99.95th=[43779], 00:42:01.699 | 99.99th=[43779] 00:42:01.699 bw ( KiB/s): min= 2656, max= 3040, per=4.27%, avg=2805.05, stdev=126.58, samples=19 00:42:01.699 iops : min= 664, max= 760, avg=701.26, stdev=31.65, samples=19 00:42:01.699 lat (msec) : 10=0.11%, 20=19.51%, 50=80.38% 00:42:01.699 cpu : usr=98.90%, sys=0.78%, ctx=15, majf=0, minf=21 00:42:01.699 IO depths : 1=1.9%, 2=4.0%, 4=13.0%, 8=69.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:42:01.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 complete : 0=0.0%, 4=91.1%, 8=4.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 issued rwts: total=7038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.699 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:42:01.699 filename2: (groupid=0, jobs=1): err= 0: pid=2639350: Fri Dec 6 17:08:48 2024 00:42:01.699 read: IOPS=669, BW=2679KiB/s (2744kB/s)(26.2MiB/10008msec) 00:42:01.699 slat (nsec): min=4108, max=83991, avg=15681.18, stdev=13003.32 00:42:01.699 clat (usec): min=14035, max=32917, avg=23758.49, stdev=903.47 00:42:01.699 lat (usec): min=14041, max=32922, avg=23774.17, stdev=902.24 00:42:01.699 clat percentiles (usec): 00:42:01.699 | 1.00th=[21890], 5.00th=[22676], 10.00th=[22938], 20.00th=[23200], 00:42:01.699 | 30.00th=[23462], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:42:01.699 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:42:01.699 | 99.00th=[26608], 99.50th=[27132], 99.90th=[30802], 99.95th=[30802], 00:42:01.699 | 99.99th=[32900] 00:42:01.699 bw ( KiB/s): min= 2560, max= 2816, per=4.07%, avg=2674.53, stdev=72.59, samples=19 00:42:01.699 iops : min= 640, max= 704, avg=668.63, stdev=18.15, samples=19 00:42:01.699 lat (msec) : 20=0.12%, 50=99.88% 00:42:01.699 cpu : usr=99.13%, sys=0.55%, ctx=19, majf=0, minf=17 00:42:01.699 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:42:01.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 issued rwts: total=6704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.699 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.699 filename2: (groupid=0, jobs=1): err= 0: pid=2639351: Fri Dec 6 17:08:48 2024 00:42:01.699 read: IOPS=683, BW=2732KiB/s (2798kB/s)(26.7MiB/10019msec) 00:42:01.699 slat (usec): min=4, max=148, avg=16.31, stdev=15.22 00:42:01.699 clat (usec): min=2263, max=37129, avg=23288.19, stdev=2968.03 00:42:01.699 lat (usec): min=2271, max=37160, avg=23304.50, stdev=2968.63 00:42:01.699 clat percentiles (usec): 00:42:01.699 | 1.00th=[ 4228], 5.00th=[21627], 10.00th=[22676], 20.00th=[22938], 00:42:01.699 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23725], 60.00th=[23987], 00:42:01.699 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:42:01.699 | 99.00th=[28181], 99.50th=[29754], 99.90th=[35914], 99.95th=[36963], 00:42:01.699 | 99.99th=[36963] 00:42:01.699 bw ( KiB/s): min= 2560, max= 3328, per=4.16%, avg=2731.20, stdev=151.89, samples=20 00:42:01.699 iops : min= 640, max= 832, avg=682.80, stdev=37.97, samples=20 00:42:01.699 lat (msec) : 4=0.92%, 10=0.63%, 20=2.66%, 50=95.79% 00:42:01.699 cpu : usr=98.99%, sys=0.71%, ctx=25, majf=0, minf=28 00:42:01.699 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:42:01.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 issued rwts: total=6844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.699 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.699 filename2: (groupid=0, jobs=1): err= 0: pid=2639352: Fri Dec 6 17:08:48 2024 00:42:01.699 read: IOPS=695, BW=2782KiB/s (2849kB/s)(27.2MiB/10008msec) 00:42:01.699 slat (nsec): min=4139, max=93692, avg=17456.60, stdev=15397.44 00:42:01.699 clat (usec): min=5224, max=43063, avg=22880.92, stdev=4249.56 00:42:01.699 lat (usec): min=5231, max=43076, avg=22898.37, stdev=4252.16 00:42:01.699 clat percentiles (usec): 00:42:01.699 | 1.00th=[12911], 5.00th=[15008], 10.00th=[16909], 20.00th=[20579], 00:42:01.699 | 30.00th=[22938], 40.00th=[23200], 
50.00th=[23462], 60.00th=[23725], 00:42:01.699 | 70.00th=[23987], 80.00th=[24249], 90.00th=[25297], 95.00th=[29492], 00:42:01.699 | 99.00th=[37487], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:42:01.699 | 99.99th=[43254] 00:42:01.699 bw ( KiB/s): min= 2608, max= 3008, per=4.23%, avg=2779.60, stdev=110.90, samples=20 00:42:01.699 iops : min= 652, max= 752, avg=694.90, stdev=27.72, samples=20 00:42:01.699 lat (msec) : 10=0.49%, 20=18.00%, 50=81.51% 00:42:01.699 cpu : usr=98.83%, sys=0.86%, ctx=14, majf=0, minf=29 00:42:01.699 IO depths : 1=1.5%, 2=3.6%, 4=11.5%, 8=70.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:42:01.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 complete : 0=0.0%, 4=90.9%, 8=4.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:01.699 issued rwts: total=6961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:01.699 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:01.699 00:42:01.699 Run status group 0 (all jobs): 00:42:01.699 READ: bw=64.1MiB/s (67.2MB/s), 2649KiB/s-2962KiB/s (2712kB/s-3033kB/s), io=644MiB (675MB), run=10001-10043msec 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
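For reference, the destroy_subsystems/create_subsystems cycle traced here reduces to a short RPC sequence against the running target. A minimal sketch, assuming SPDK's stock scripts/rpc.py client and the default RPC socket; the commands and arguments are the ones visible in the rpc_cmd trace, while the invocation style is illustrative:

    # tear down one test subsystem (sub_id 0 shown; 1 and 2 are identical)
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0

    # re-create it for the next case: 64 MB null bdev, 512 B blocks,
    # 16 B metadata, protection information (DIF) type 1
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420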
00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.699 bdev_null0 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.699 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.700 17:08:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.700 [2024-12-06 17:08:48.677504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.700 bdev_null1 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:01.700 { 00:42:01.700 "params": { 00:42:01.700 "name": "Nvme$subsystem", 00:42:01.700 "trtype": "$TEST_TRANSPORT", 00:42:01.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:01.700 "adrfam": "ipv4", 00:42:01.700 "trsvcid": "$NVMF_PORT", 00:42:01.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:01.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:01.700 "hdgst": ${hdgst:-false}, 00:42:01.700 "ddgst": ${ddgst:-false} 00:42:01.700 }, 00:42:01.700 "method": "bdev_nvme_attach_controller" 00:42:01.700 } 00:42:01.700 EOF 00:42:01.700 )") 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:01.700 { 00:42:01.700 "params": { 00:42:01.700 "name": "Nvme$subsystem", 00:42:01.700 "trtype": "$TEST_TRANSPORT", 00:42:01.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:01.700 "adrfam": "ipv4", 00:42:01.700 "trsvcid": "$NVMF_PORT", 00:42:01.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:01.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:01.700 "hdgst": ${hdgst:-false}, 00:42:01.700 "ddgst": ${ddgst:-false} 00:42:01.700 }, 00:42:01.700 "method": "bdev_nvme_attach_controller" 00:42:01.700 } 00:42:01.700 EOF 00:42:01.700 )") 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:01.700 "params": { 00:42:01.700 "name": "Nvme0", 00:42:01.700 "trtype": "tcp", 00:42:01.700 "traddr": "10.0.0.2", 00:42:01.700 "adrfam": "ipv4", 00:42:01.700 "trsvcid": "4420", 00:42:01.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:01.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:01.700 "hdgst": false, 00:42:01.700 "ddgst": false 00:42:01.700 }, 00:42:01.700 "method": "bdev_nvme_attach_controller" 00:42:01.700 },{ 00:42:01.700 "params": { 00:42:01.700 "name": "Nvme1", 00:42:01.700 "trtype": "tcp", 00:42:01.700 "traddr": "10.0.0.2", 00:42:01.700 "adrfam": "ipv4", 00:42:01.700 "trsvcid": "4420", 00:42:01.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:01.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:01.700 "hdgst": false, 00:42:01.700 "ddgst": false 00:42:01.700 }, 00:42:01.700 "method": "bdev_nvme_attach_controller" 00:42:01.700 }' 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:01.700 17:08:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:01.700 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:01.700 ... 00:42:01.700 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:01.700 ... 
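The two fio job banners above come from a job file that dif.sh builds on the fly (gen_fio_conf) and hands to fio over /dev/fd/61, so the file itself never appears in the log. From the banners and the test parameters set earlier (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) it is roughly equivalent to the sketch below; the Nvme0n1/Nvme1n1 filenames follow SPDK's usual controller-plus-namespace bdev naming and are an assumption, not taken from this log:

    ; dif.fio, hypothetical reconstruction
    [global]
    thread=1                  ; required by the SPDK fio plugin
    ioengine=spdk_bdev        ; drive IO through SPDK bdevs, not the kernel
    rw=randread
    bs=8k,16k,128k            ; read,write,trim block sizes, matching the banner
    iodepth=8
    numjobs=2                 ; 2 jobs x 2 files = the 4 threads fio reports
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1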
00:42:01.700 fio-3.35 00:42:01.700 Starting 4 threads 00:42:06.985 00:42:06.985 filename0: (groupid=0, jobs=1): err= 0: pid=2641842: Fri Dec 6 17:08:54 2024 00:42:06.985 read: IOPS=3004, BW=23.5MiB/s (24.6MB/s)(117MiB/5001msec) 00:42:06.985 slat (nsec): min=3144, max=47580, avg=8835.20, stdev=3453.78 00:42:06.985 clat (usec): min=875, max=5094, avg=2639.67, stdev=374.89 00:42:06.985 lat (usec): min=880, max=5104, avg=2648.50, stdev=374.74 00:42:06.985 clat percentiles (usec): 00:42:06.985 | 1.00th=[ 1778], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2343], 00:42:06.985 | 30.00th=[ 2474], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2704], 00:42:06.985 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 3032], 95.00th=[ 3392], 00:42:06.985 | 99.00th=[ 3687], 99.50th=[ 3785], 99.90th=[ 4228], 99.95th=[ 4359], 00:42:06.985 | 99.99th=[ 5080] 00:42:06.985 bw ( KiB/s): min=23008, max=24560, per=25.81%, avg=24065.67, stdev=453.77, samples=9 00:42:06.985 iops : min= 2876, max= 3070, avg=3008.11, stdev=56.77, samples=9 00:42:06.985 lat (usec) : 1000=0.02% 00:42:06.985 lat (msec) : 2=3.32%, 4=96.41%, 10=0.25% 00:42:06.985 cpu : usr=94.82%, sys=3.54%, ctx=196, majf=0, minf=9 00:42:06.985 IO depths : 1=0.1%, 2=0.8%, 4=69.4%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 issued rwts: total=15025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.985 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.985 filename0: (groupid=0, jobs=1): err= 0: pid=2641843: Fri Dec 6 17:08:54 2024 00:42:06.985 read: IOPS=2873, BW=22.5MiB/s (23.5MB/s)(112MiB/5002msec) 00:42:06.985 slat (nsec): min=2976, max=46832, avg=6286.98, stdev=2109.91 00:42:06.985 clat (usec): min=1077, max=4988, avg=2766.17, stdev=290.74 00:42:06.985 lat (usec): min=1083, max=4993, avg=2772.45, stdev=290.69 00:42:06.985 clat percentiles (usec): 00:42:06.985 | 1.00th=[ 2089], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2606], 00:42:06.985 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:42:06.985 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3228], 00:42:06.985 | 99.00th=[ 3916], 99.50th=[ 4178], 99.90th=[ 4621], 99.95th=[ 4686], 00:42:06.985 | 99.99th=[ 5014] 00:42:06.985 bw ( KiB/s): min=22752, max=23136, per=24.64%, avg=22977.78, stdev=148.26, samples=9 00:42:06.985 iops : min= 2844, max= 2892, avg=2872.22, stdev=18.53, samples=9 00:42:06.985 lat (msec) : 2=0.52%, 4=98.63%, 10=0.85% 00:42:06.985 cpu : usr=97.02%, sys=2.70%, ctx=6, majf=0, minf=9 00:42:06.985 IO depths : 1=0.1%, 2=0.2%, 4=72.8%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 issued rwts: total=14374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.985 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.985 filename1: (groupid=0, jobs=1): err= 0: pid=2641844: Fri Dec 6 17:08:54 2024 00:42:06.985 read: IOPS=2893, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:42:06.985 slat (nsec): min=3036, max=43184, avg=6593.58, stdev=2306.88 00:42:06.985 clat (usec): min=1289, max=5068, avg=2747.12, stdev=315.60 00:42:06.985 lat (usec): min=1295, max=5074, avg=2753.71, stdev=315.60 00:42:06.985 clat percentiles (usec): 00:42:06.985 | 1.00th=[ 2008], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2573], 
00:42:06.985 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:42:06.985 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3294], 00:42:06.985 | 99.00th=[ 3851], 99.50th=[ 4178], 99.90th=[ 4424], 99.95th=[ 4686], 00:42:06.985 | 99.99th=[ 5080] 00:42:06.985 bw ( KiB/s): min=22880, max=23888, per=24.81%, avg=23132.22, stdev=302.42, samples=9 00:42:06.985 iops : min= 2860, max= 2986, avg=2891.44, stdev=37.81, samples=9 00:42:06.985 lat (msec) : 2=0.93%, 4=98.40%, 10=0.68% 00:42:06.985 cpu : usr=96.70%, sys=3.04%, ctx=6, majf=0, minf=0 00:42:06.985 IO depths : 1=0.1%, 2=0.5%, 4=71.2%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 issued rwts: total=14470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.985 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.985 filename1: (groupid=0, jobs=1): err= 0: pid=2641845: Fri Dec 6 17:08:54 2024 00:42:06.985 read: IOPS=2884, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:42:06.985 slat (nsec): min=2985, max=42299, avg=6415.27, stdev=2314.37 00:42:06.985 clat (usec): min=1202, max=4679, avg=2755.59, stdev=296.16 00:42:06.985 lat (usec): min=1209, max=4684, avg=2762.00, stdev=296.12 00:42:06.985 clat percentiles (usec): 00:42:06.985 | 1.00th=[ 2040], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2573], 00:42:06.985 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:42:06.985 | 70.00th=[ 2769], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3261], 00:42:06.985 | 99.00th=[ 3785], 99.50th=[ 4015], 99.90th=[ 4490], 99.95th=[ 4555], 00:42:06.985 | 99.99th=[ 4686] 00:42:06.985 bw ( KiB/s): min=22688, max=23248, per=24.74%, avg=23063.11, stdev=172.90, samples=9 00:42:06.985 iops : min= 2836, max= 2906, avg=2882.89, stdev=21.61, samples=9 00:42:06.985 lat (msec) : 2=0.69%, 4=98.80%, 10=0.51% 00:42:06.985 cpu : usr=96.46%, sys=3.26%, ctx=6, majf=0, minf=0 00:42:06.985 IO depths : 1=0.1%, 2=0.4%, 4=71.9%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.985 issued rwts: total=14429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.985 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.985 00:42:06.985 Run status group 0 (all jobs): 00:42:06.985 READ: bw=91.1MiB/s (95.5MB/s), 22.5MiB/s-23.5MiB/s (23.5MB/s-24.6MB/s), io=455MiB (478MB), run=5001-5002msec 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.985 17:08:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.985 00:42:06.985 real 0m23.803s 00:42:06.985 user 5m6.335s 00:42:06.985 sys 0m4.062s 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.985 17:08:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:06.985 ************************************ 00:42:06.985 END TEST fio_dif_rand_params 00:42:06.985 ************************************ 00:42:06.985 17:08:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:06.985 17:08:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:06.985 17:08:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:06.985 17:08:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:06.985 ************************************ 00:42:06.985 START TEST fio_dif_digest 00:42:06.985 ************************************ 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:06.985 17:08:54 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.986 bdev_null0 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.986 [2024-12-06 17:08:54.964998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:06.986 
17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:06.986 { 00:42:06.986 "params": { 00:42:06.986 "name": "Nvme$subsystem", 00:42:06.986 "trtype": "$TEST_TRANSPORT", 00:42:06.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:06.986 "adrfam": "ipv4", 00:42:06.986 "trsvcid": "$NVMF_PORT", 00:42:06.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:06.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:06.986 "hdgst": ${hdgst:-false}, 00:42:06.986 "ddgst": ${ddgst:-false} 00:42:06.986 }, 00:42:06.986 "method": "bdev_nvme_attach_controller" 00:42:06.986 } 00:42:06.986 EOF 00:42:06.986 )") 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:06.986 "params": { 00:42:06.986 "name": "Nvme0", 00:42:06.986 "trtype": "tcp", 00:42:06.986 "traddr": "10.0.0.2", 00:42:06.986 "adrfam": "ipv4", 00:42:06.986 "trsvcid": "4420", 00:42:06.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:06.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:06.986 "hdgst": true, 00:42:06.986 "ddgst": true 00:42:06.986 }, 00:42:06.986 "method": "bdev_nvme_attach_controller" 00:42:06.986 }' 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:06.986 17:08:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:06.986 17:08:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:06.986 17:08:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:06.986 17:08:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:06.986 17:08:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:06.986 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:06.986 ... 
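The printf output above is the bdev_nvme_attach_controller fragment that gen_nvmf_target_json emits for this digest test; "hdgst": true and "ddgst": true are what enable NVMe/TCP header and data digests for the run, on top of the DIF type 3 null bdev created just before. Passed to the fio bdev plugin via --spdk_json_conf, the fragment presumably ends up inside SPDK's standard subsystem-config wrapper, roughly as follows (the "subsystems"/"bdev" framing is inferred; the params are verbatim from the trace):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true,
                "ddgst": true
              }
            }
          ]
        }
      ]
    }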
00:42:06.986 fio-3.35 00:42:06.986 Starting 3 threads 00:42:19.188 00:42:19.188 filename0: (groupid=0, jobs=1): err= 0: pid=2643358: Fri Dec 6 17:09:05 2024 00:42:19.188 read: IOPS=302, BW=37.9MiB/s (39.7MB/s)(380MiB/10044msec) 00:42:19.188 slat (nsec): min=4415, max=79663, avg=7201.85, stdev=2055.21 00:42:19.188 clat (usec): min=6359, max=51262, avg=9879.40, stdev=1357.41 00:42:19.188 lat (usec): min=6366, max=51272, avg=9886.60, stdev=1357.48 00:42:19.188 clat percentiles (usec): 00:42:19.188 | 1.00th=[ 7701], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9110], 00:42:19.188 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:42:19.188 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:42:19.188 | 99.00th=[11994], 99.50th=[12256], 99.90th=[14091], 99.95th=[48497], 00:42:19.188 | 99.99th=[51119] 00:42:19.188 bw ( KiB/s): min=37888, max=39936, per=34.30%, avg=38924.80, stdev=601.66, samples=20 00:42:19.188 iops : min= 296, max= 312, avg=304.10, stdev= 4.70, samples=20 00:42:19.188 lat (msec) : 10=56.03%, 20=43.90%, 50=0.03%, 100=0.03% 00:42:19.188 cpu : usr=95.12%, sys=4.62%, ctx=17, majf=0, minf=140 00:42:19.188 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.188 issued rwts: total=3043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.188 filename0: (groupid=0, jobs=1): err= 0: pid=2643359: Fri Dec 6 17:09:05 2024 00:42:19.188 read: IOPS=297, BW=37.2MiB/s (39.0MB/s)(374MiB/10045msec) 00:42:19.188 slat (nsec): min=4306, max=80374, avg=6992.90, stdev=1830.30 00:42:19.188 clat (usec): min=6905, max=49912, avg=10062.82, stdev=1378.42 00:42:19.188 lat (usec): min=6914, max=49920, avg=10069.81, stdev=1378.39 00:42:19.188 clat percentiles (usec): 00:42:19.188 | 1.00th=[ 7439], 5.00th=[ 8356], 10.00th=[ 8848], 20.00th=[ 9372], 00:42:19.188 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:42:19.188 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11207], 95.00th=[11600], 00:42:19.188 | 99.00th=[12387], 99.50th=[12780], 99.90th=[15664], 99.95th=[45876], 00:42:19.188 | 99.99th=[50070] 00:42:19.188 bw ( KiB/s): min=36096, max=39936, per=33.68%, avg=38220.80, stdev=983.10, samples=20 00:42:19.188 iops : min= 282, max= 312, avg=298.60, stdev= 7.68, samples=20 00:42:19.188 lat (msec) : 10=47.69%, 20=52.24%, 50=0.07% 00:42:19.188 cpu : usr=95.57%, sys=4.17%, ctx=31, majf=0, minf=193 00:42:19.188 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.188 issued rwts: total=2988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.188 filename0: (groupid=0, jobs=1): err= 0: pid=2643360: Fri Dec 6 17:09:05 2024 00:42:19.188 read: IOPS=286, BW=35.8MiB/s (37.5MB/s)(359MiB/10045msec) 00:42:19.188 slat (nsec): min=5457, max=74695, avg=6706.27, stdev=1586.03 00:42:19.188 clat (usec): min=6918, max=51300, avg=10462.21, stdev=1897.83 00:42:19.188 lat (usec): min=6924, max=51307, avg=10468.92, stdev=1897.87 00:42:19.188 clat percentiles (usec): 00:42:19.188 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 
00:42:19.188 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:42:19.188 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:42:19.188 | 99.00th=[12780], 99.50th=[13435], 99.90th=[51119], 99.95th=[51119], 00:42:19.188 | 99.99th=[51119] 00:42:19.188 bw ( KiB/s): min=33536, max=38144, per=32.40%, avg=36761.60, stdev=892.24, samples=20 00:42:19.188 iops : min= 262, max= 298, avg=287.20, stdev= 6.97, samples=20 00:42:19.188 lat (msec) : 10=32.29%, 20=67.54%, 50=0.07%, 100=0.10% 00:42:19.188 cpu : usr=95.65%, sys=4.09%, ctx=20, majf=0, minf=151 00:42:19.188 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.188 issued rwts: total=2874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:19.188 00:42:19.188 Run status group 0 (all jobs): 00:42:19.188 READ: bw=111MiB/s (116MB/s), 35.8MiB/s-37.9MiB/s (37.5MB/s-39.7MB/s), io=1113MiB (1167MB), run=10044-10045msec 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.188 17:09:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.188 00:42:19.188 real 0m11.065s 00:42:19.188 user 0m41.209s 00:42:19.188 sys 0m1.557s 00:42:19.188 17:09:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:19.188 17:09:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.188 ************************************ 00:42:19.188 END TEST fio_dif_digest 00:42:19.188 ************************************ 00:42:19.188 17:09:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:19.188 17:09:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:19.188 rmmod nvme_tcp 00:42:19.188 rmmod nvme_fabrics 00:42:19.188 rmmod nvme_keyring 00:42:19.188 17:09:06 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2632102 ']' 00:42:19.188 17:09:06 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2632102 00:42:19.188 17:09:06 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2632102 ']' 00:42:19.188 17:09:06 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2632102 00:42:19.188 17:09:06 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:42:19.188 17:09:06 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:19.188 17:09:06 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2632102 00:42:19.188 17:09:06 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:19.188 17:09:06 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:19.189 17:09:06 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2632102' 00:42:19.189 killing process with pid 2632102 00:42:19.189 17:09:06 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2632102 00:42:19.189 17:09:06 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2632102 00:42:19.189 17:09:06 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:19.189 17:09:06 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:19.756 Waiting for block devices as requested 00:42:19.756 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:20.015 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:20.015 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:20.015 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:20.015 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:20.273 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:20.273 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:20.273 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:20.273 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:20.533 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:20.533 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:20.533 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:20.792 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:20.792 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:20.792 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:20.792 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:20.792 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:21.360 17:09:09 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:21.360 17:09:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:21.360 17:09:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.366 17:09:11 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:23.366 
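Before the timing summary, note that the nvmftestfini cleanup traced above reduces to a handful of commands. A hand-written equivalent, assuming this rig's names (target namespace cvl_0_0_ns_spdk, initiator interface cvl_0_1) and that _remove_spdk_ns amounts to an ip netns delete underneath:

kill "$nvmfpid" && wait "$nvmfpid"   # stop nvmf_tgt (pid 2632102 in this run)
modprobe -v -r nvme-tcp              # the verbose rmmod lines above: nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics          # no-op here; already removed as a dependency
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only SPDK's ACCEPT rules
ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1             # clear the initiator-side address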
00:42:23.366 real 1m11.949s 00:42:23.366 user 7m43.190s 00:42:23.366 sys 0m17.642s 00:42:23.366 17:09:11 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:23.366 17:09:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:23.366 ************************************ 00:42:23.366 END TEST nvmf_dif 00:42:23.366 ************************************ 00:42:23.366 17:09:11 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:23.366 17:09:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:23.366 17:09:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:23.366 17:09:11 -- common/autotest_common.sh@10 -- # set +x 00:42:23.366 ************************************ 00:42:23.366 START TEST nvmf_abort_qd_sizes 00:42:23.366 ************************************ 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:23.366 * Looking for test storage... 00:42:23.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:23.366 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:23.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.367 --rc genhtml_branch_coverage=1 00:42:23.367 --rc genhtml_function_coverage=1 00:42:23.367 --rc genhtml_legend=1 00:42:23.367 --rc geninfo_all_blocks=1 00:42:23.367 --rc geninfo_unexecuted_blocks=1 00:42:23.367 00:42:23.367 ' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:23.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.367 --rc genhtml_branch_coverage=1 00:42:23.367 --rc genhtml_function_coverage=1 00:42:23.367 --rc genhtml_legend=1 00:42:23.367 --rc geninfo_all_blocks=1 00:42:23.367 --rc geninfo_unexecuted_blocks=1 00:42:23.367 00:42:23.367 ' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:23.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.367 --rc genhtml_branch_coverage=1 00:42:23.367 --rc genhtml_function_coverage=1 00:42:23.367 --rc genhtml_legend=1 00:42:23.367 --rc geninfo_all_blocks=1 00:42:23.367 --rc geninfo_unexecuted_blocks=1 00:42:23.367 00:42:23.367 ' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:23.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.367 --rc genhtml_branch_coverage=1 00:42:23.367 --rc genhtml_function_coverage=1 00:42:23.367 --rc genhtml_legend=1 00:42:23.367 --rc geninfo_all_blocks=1 00:42:23.367 --rc geninfo_unexecuted_blocks=1 00:42:23.367 00:42:23.367 ' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:23.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:23.367 17:09:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:42:28.638 Found 0000:31:00.0 (0x8086 - 0x159b) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:42:28.638 Found 0000:31:00.1 (0x8086 - 0x159b) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:42:28.638 Found net devices under 0000:31:00.0: cvl_0_0 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.638 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:42:28.639 Found net devices under 0000:31:00.1: cvl_0_1 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:28.639 17:09:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:28.639 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:28.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:28.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:42:28.898 00:42:28.898 --- 10.0.0.2 ping statistics --- 00:42:28.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.898 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:28.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:28.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:42:28.898 00:42:28.898 --- 10.0.0.1 ping statistics --- 00:42:28.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:28.898 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:28.898 17:09:17 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:31.433 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:31.433 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2653624 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2653624 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2653624 ']' 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:31.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
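Stepping back, the nvmf_tcp_init sequence traced through nvmf/common.sh@250-291 builds a two-port topology in which the target hides in its own network namespace, so initiator traffic genuinely crosses the NICs. Condensed, and assuming the same cvl_0_0/cvl_0_1 port names as this rig, it is roughly:

ip netns add cvl_0_0_ns_spdk                      # target gets a private netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                # initiator -> target, as above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator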
00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:31.693 17:09:20 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:31.693 [2024-12-06 17:09:20.324466] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:42:31.693 [2024-12-06 17:09:20.324514] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:31.953 [2024-12-06 17:09:20.409016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:31.953 [2024-12-06 17:09:20.428678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:31.953 [2024-12-06 17:09:20.428712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:31.953 [2024-12-06 17:09:20.428721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:31.953 [2024-12-06 17:09:20.428728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:31.953 [2024-12-06 17:09:20.428734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:31.953 [2024-12-06 17:09:20.430236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:31.953 [2024-12-06 17:09:20.430341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:31.953 [2024-12-06 17:09:20.430499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:31.953 [2024-12-06 17:09:20.430500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:32.522 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:32.523 17:09:21 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:42:32.523 17:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:32.523 17:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:42:32.523 17:09:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:32.523 17:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:32.523 17:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:32.523 17:09:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:32.523 ************************************ 00:42:32.523 START TEST spdk_target_abort 00:42:32.523 ************************************ 00:42:32.523 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:32.523 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:32.523 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:42:32.523 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.523 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:32.783 spdk_targetn1 00:42:32.783 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.783 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:32.783 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.783 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:32.783 [2024-12-06 17:09:21.471595] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.044 17:09:21 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:33.044 [2024-12-06 17:09:21.511876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:33.044 17:09:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 
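The rabort helper launched above boils down to exporting the local NVMe namespace over TCP and then driving SPDK's abort example against it at each queue depth in qds=(4 24 64). Spelled out by hand — rpc_cmd in the trace wraps scripts/rpc.py, and the addresses and NQNs are this run's — the sequence is roughly:

./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
for qd in 4 24 64; do   # abort issues mixed I/O (-w rw -M 50) and aborts it in flight
  ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done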
00:42:33.044 [2024-12-06 17:09:21.632924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:160 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:42:33.044 [2024-12-06 17:09:21.632951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0015 p:1 m:0 dnr:0 00:42:33.044 [2024-12-06 17:09:21.655840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1264 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:42:33.044 [2024-12-06 17:09:21.655860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00a0 p:1 m:0 dnr:0 00:42:33.044 [2024-12-06 17:09:21.696254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3416 len:8 PRP1 0x200004abe000 PRP2 0x0 00:42:33.044 [2024-12-06 17:09:21.696274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00ad p:0 m:0 dnr:0 00:42:36.333 Initializing NVMe Controllers 00:42:36.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:36.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:36.333 Initialization complete. Launching workers. 00:42:36.333 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16721, failed: 3 00:42:36.333 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2922, failed to submit 13802 00:42:36.333 success 634, unsuccessful 2288, failed 0 00:42:36.333 17:09:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:36.333 17:09:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:36.333 [2024-12-06 17:09:24.911928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:296 len:8 PRP1 0x200004e5a000 PRP2 0x0 00:42:36.333 [2024-12-06 17:09:24.911965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:42:36.333 [2024-12-06 17:09:24.943821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1032 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:42:36.333 [2024-12-06 17:09:24.943842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:42:36.333 [2024-12-06 17:09:24.975900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:1784 len:8 PRP1 0x200004e56000 PRP2 0x0 00:42:36.333 [2024-12-06 17:09:24.975920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:00e1 p:1 m:0 dnr:0 00:42:36.333 [2024-12-06 17:09:25.007848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:2448 len:8 PRP1 0x200004e3e000 PRP2 0x0 00:42:36.333 [2024-12-06 17:09:25.007867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:42:36.333 [2024-12-06 17:09:25.023906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:2848 len:8 PRP1 0x200004e46000 PRP2 0x0 
00:42:36.333 [2024-12-06 17:09:25.023925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:42:36.591 [2024-12-06 17:09:25.047785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:3376 len:8 PRP1 0x200004e54000 PRP2 0x0 00:42:36.591 [2024-12-06 17:09:25.047805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00b5 p:0 m:0 dnr:0 00:42:39.880 Initializing NVMe Controllers 00:42:39.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:39.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:39.880 Initialization complete. Launching workers. 00:42:39.880 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8592, failed: 6 00:42:39.880 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7337 00:42:39.880 success 358, unsuccessful 903, failed 0 00:42:39.880 17:09:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:39.880 17:09:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:41.259 [2024-12-06 17:09:29.582912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:173 nsid:1 lba:157912 len:8 PRP1 0x200004ae2000 PRP2 0x0 00:42:41.259 [2024-12-06 17:09:29.582959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:173 cdw0:0 sqhd:0075 p:1 m:0 dnr:0 00:42:42.636 Initializing NVMe Controllers 00:42:42.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:42.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:42.636 Initialization complete. Launching workers. 
00:42:42.636 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43851, failed: 1 00:42:42.636 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2734, failed to submit 41118 00:42:42.636 success 603, unsuccessful 2131, failed 0 00:42:42.636 17:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:42.636 17:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.636 17:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:42.636 17:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.636 17:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:42.636 17:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.636 17:09:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2653624 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2653624 ']' 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2653624 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2653624 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2653624' 00:42:44.545 killing process with pid 2653624 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2653624 00:42:44.545 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2653624 00:42:44.805 00:42:44.805 real 0m12.079s 00:42:44.805 user 0m49.083s 00:42:44.805 sys 0m1.865s 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.805 ************************************ 00:42:44.805 END TEST spdk_target_abort 00:42:44.805 ************************************ 00:42:44.805 17:09:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:44.805 17:09:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:44.805 17:09:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:44.805 17:09:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:44.805 ************************************ 00:42:44.805 START TEST kernel_target_abort 00:42:44.805 
************************************ 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:44.805 17:09:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:47.344 Waiting for block devices as requested 00:42:47.344 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:47.344 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:47.602 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:47.602 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:47.602 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:47.861 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:47.861 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:47.861 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:47.861 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:47.861 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:48.428 No valid GPT data, bailing 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:48.428 17:09:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:42:48.428 00:42:48.428 Discovery Log Number of Records 2, Generation counter 2 00:42:48.428 =====Discovery Log Entry 0====== 00:42:48.428 trtype: tcp 00:42:48.428 adrfam: ipv4 00:42:48.428 subtype: current discovery subsystem 00:42:48.428 treq: not specified, sq flow control disable supported 00:42:48.428 portid: 1 00:42:48.428 trsvcid: 4420 00:42:48.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:48.428 traddr: 10.0.0.1 00:42:48.428 eflags: none 00:42:48.428 sectype: none 00:42:48.428 =====Discovery Log Entry 1====== 00:42:48.428 trtype: tcp 00:42:48.428 adrfam: ipv4 00:42:48.428 subtype: nvme subsystem 00:42:48.428 treq: not specified, sq flow control disable supported 00:42:48.428 portid: 1 00:42:48.428 trsvcid: 4420 00:42:48.428 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:48.428 traddr: 10.0.0.1 00:42:48.428 eflags: none 00:42:48.428 sectype: none 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:48.428 17:09:36 
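The configure_kernel_target steps traced above drive the standard Linux nvmet configfs tree. A minimal standalone sketch of that sequence follows; the mkdir/echo/ln -s commands and their values come straight from the trace, but the attribute file names on the right of each redirect are assumed from the usual nvmet layout, since xtrace does not show redirect targets:

#!/usr/bin/env bash
# Bring up a kernel NVMe-oF/TCP target via configfs, as traced above.
set -e
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                                  # exposes /sys/kernel/config/nvmet
mkdir "$sub" "$sub/namespaces/1" "$port"        # subsystem, namespace 1, port 1

echo "SPDK-$nqn"  > "$sub/attr_model"           # model string (assumed attribute name)
echo 1            > "$sub/attr_allow_any_host"  # no host allow-listing
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"

echo 10.0.0.1     > "$port/addr_traddr"         # listener address/transport/port/family
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"

ln -s "$sub" "$port/subsystems/"                # publish the subsystem on the port

# Teardown mirrors the clean_kernel_target trace later in this test:
#   echo 0 > "$sub/namespaces/1/enable"
#   rm -f "$port/subsystems/$nqn"
#   rmdir "$sub/namespaces/1" "$port" "$sub"
#   modprobe -r nvmet_tcp nvmet

The nvme discover output above confirms the result: the well-known discovery subsystem plus nqn.2016-06.io.spdk:testnqn, both listening on 10.0.0.1:4420.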
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:48.428 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:48.429 17:09:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:51.714 Initializing NVMe Controllers 00:42:51.714 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:51.714 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:51.714 Initialization complete. Launching workers. 00:42:51.714 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94923, failed: 0 00:42:51.714 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94923, failed to submit 0 00:42:51.714 success 0, unsuccessful 94923, failed 0 00:42:51.714 17:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:51.714 17:09:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:55.006 Initializing NVMe Controllers 00:42:55.006 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:55.006 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:55.006 Initialization complete. Launching workers. 
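The rabort helper whose trace runs through here assembles a transport ID string field by field, then sweeps the SPDK abort example over a few queue depths; condensed, with every flag as it appears in the log:

#!/usr/bin/env bash
# Condensed rabort loop from abort_qd_sizes.sh: one abort-example run per
# queue depth, 50/50 read/write, 4 KiB I/O, against the kernel target above.
trid='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$trid"
done

The stats after each run record how many abort commands could be submitted and how they fared; note failed-to-submit rising from 0 at -q 4 to six figures at the deeper queue depths in the runs that follow.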
00:42:55.006 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 155454, failed: 0 00:42:55.006 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39118, failed to submit 116336 00:42:55.006 success 0, unsuccessful 39118, failed 0 00:42:55.006 17:09:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:55.006 17:09:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:58.291 Initializing NVMe Controllers 00:42:58.291 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:58.291 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:58.291 Initialization complete. Launching workers. 00:42:58.291 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146101, failed: 0 00:42:58.291 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36570, failed to submit 109531 00:42:58.291 success 0, unsuccessful 36570, failed 0 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:42:58.291 17:09:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:00.195 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:43:00.195 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:43:00.195 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:02.099 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:43:02.099 00:43:02.099 real 0m17.444s 00:43:02.099 user 0m8.734s 00:43:02.099 sys 0m4.363s 00:43:02.099 17:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:02.099 17:09:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:02.099 ************************************ 00:43:02.099 END TEST kernel_target_abort 00:43:02.099 ************************************ 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:02.099 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:02.099 rmmod nvme_tcp 00:43:02.099 rmmod nvme_fabrics 00:43:02.358 rmmod nvme_keyring 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2653624 ']' 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2653624 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2653624 ']' 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2653624 00:43:02.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2653624) - No such process 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2653624 is not found' 00:43:02.358 Process with pid 2653624 is not found 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:02.358 17:09:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:04.897 Waiting for block devices as requested 00:43:04.897 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:04.897 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:43:05.156 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:05.156 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:05.156 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:05.415 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:05.415 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:05.415 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:05.415 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:05.415 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:05.984 17:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:07.898 17:09:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:07.899 00:43:07.899 real 0m44.586s 00:43:07.899 user 1m1.409s 00:43:07.899 sys 0m14.257s 00:43:07.899 17:09:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:07.899 17:09:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:07.899 ************************************ 00:43:07.899 END TEST nvmf_abort_qd_sizes 00:43:07.899 ************************************ 00:43:07.899 17:09:56 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:07.899 17:09:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:07.899 17:09:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:07.899 17:09:56 -- common/autotest_common.sh@10 -- # set +x 00:43:07.899 ************************************ 00:43:07.899 START TEST keyring_file 00:43:07.899 ************************************ 00:43:07.899 17:09:56 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:07.899 * Looking for test storage... 
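Before the END TEST banner above, nvmftestfini's nvmf_tcp_fini path undoes the host-side state; stripped of xtrace noise it comes down to four commands, all visible in the trace:

# Host-side teardown as traced above.
modprobe -v -r nvme-tcp                               # also drops nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep all rules except SPDK_NVMF ones
ip -4 addr flush cvl_0_1                              # clear the test interface address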
00:43:07.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:07.899 17:09:56 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:07.899 17:09:56 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:43:07.899 17:09:56 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:08.159 17:09:56 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:08.159 17:09:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:08.159 17:09:56 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:08.159 17:09:56 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.159 --rc genhtml_branch_coverage=1 00:43:08.159 --rc genhtml_function_coverage=1 00:43:08.159 --rc genhtml_legend=1 00:43:08.159 --rc geninfo_all_blocks=1 00:43:08.159 --rc geninfo_unexecuted_blocks=1 00:43:08.159 00:43:08.159 ' 00:43:08.159 17:09:56 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.159 --rc genhtml_branch_coverage=1 00:43:08.159 --rc genhtml_function_coverage=1 00:43:08.159 --rc genhtml_legend=1 00:43:08.159 --rc geninfo_all_blocks=1 
00:43:08.159 --rc geninfo_unexecuted_blocks=1 00:43:08.159 00:43:08.159 ' 00:43:08.159 17:09:56 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.159 --rc genhtml_branch_coverage=1 00:43:08.159 --rc genhtml_function_coverage=1 00:43:08.159 --rc genhtml_legend=1 00:43:08.159 --rc geninfo_all_blocks=1 00:43:08.159 --rc geninfo_unexecuted_blocks=1 00:43:08.159 00:43:08.159 ' 00:43:08.159 17:09:56 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:08.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:08.159 --rc genhtml_branch_coverage=1 00:43:08.159 --rc genhtml_function_coverage=1 00:43:08.159 --rc genhtml_legend=1 00:43:08.159 --rc geninfo_all_blocks=1 00:43:08.159 --rc geninfo_unexecuted_blocks=1 00:43:08.159 00:43:08.159 ' 00:43:08.159 17:09:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:08.159 17:09:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:08.159 17:09:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:08.160 17:09:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:08.160 17:09:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:08.160 17:09:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:08.160 17:09:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:08.160 17:09:56 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.160 17:09:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.160 17:09:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.160 17:09:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:08.160 17:09:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:08.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.P98UB2FcZt 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P98UB2FcZt 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.P98UB2FcZt 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.P98UB2FcZt 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5jDJYPLmTO 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:08.160 17:09:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5jDJYPLmTO 00:43:08.160 17:09:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5jDJYPLmTO 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5jDJYPLmTO 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=2664098 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2664098 00:43:08.160 17:09:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2664098 ']' 00:43:08.160 17:09:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:08.160 17:09:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:08.160 17:09:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
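prep_key above mktemps a path, formats the raw hex key into its interchange form, writes it there, and chmods the file to 0600 so the keyring will accept it. The trace collapses the formatting step to "python -"; here is a sketch of what it computes, assuming the NVMe/TCP TLS PSK interchange layout (base64 over the key bytes plus a little-endian CRC32, wrapped as prefix:digest:...:) — the helper name is illustrative:

format_key_sketch() {   # args: prefix hex-key digest-id
    python3 - "$1" "$2" "$3" <<'PY'
import base64, struct, sys, zlib
prefix, keyhex, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
raw = bytes.fromhex(keyhex)
crc = struct.pack("<I", zlib.crc32(raw))   # 4-byte little-endian CRC32 of the key
print(f"{prefix}:{digest:02x}:{base64.b64encode(raw + crc).decode()}:")
PY
}
format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
# -> NVMeTLSkey-1:00:<base64 of key||crc>:  (the string stored in /tmp/tmp.P98UB2FcZt)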
00:43:08.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:08.160 17:09:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:08.160 17:09:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.160 17:09:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:08.160 [2024-12-06 17:09:56.750228] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:43:08.160 [2024-12-06 17:09:56.750304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664098 ] 00:43:08.160 [2024-12-06 17:09:56.821668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.160 [2024-12-06 17:09:56.844661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:08.420 17:09:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:08.420 17:09:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:08.420 17:09:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:08.420 17:09:56 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.420 17:09:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.420 [2024-12-06 17:09:57.002720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:08.420 null0 00:43:08.420 [2024-12-06 17:09:57.034777] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:08.420 [2024-12-06 17:09:57.035141] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.420 17:09:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.420 [2024-12-06 17:09:57.062838] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:08.420 request: 00:43:08.420 { 00:43:08.420 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:08.420 "secure_channel": false, 00:43:08.420 "listen_address": { 00:43:08.420 "trtype": "tcp", 00:43:08.420 "traddr": "127.0.0.1", 00:43:08.420 "trsvcid": "4420" 00:43:08.420 }, 00:43:08.420 "method": "nvmf_subsystem_add_listener", 00:43:08.420 "req_id": 1 00:43:08.420 } 00:43:08.420 Got JSON-RPC error response 
00:43:08.420 response: 00:43:08.420 { 00:43:08.420 "code": -32602, 00:43:08.420 "message": "Invalid parameters" 00:43:08.420 } 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:08.420 17:09:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=2664267 00:43:08.420 17:09:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2664267 /var/tmp/bperf.sock 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2664267 ']' 00:43:08.420 17:09:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:08.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:08.420 17:09:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:08.420 [2024-12-06 17:09:57.100279] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:43:08.420 [2024-12-06 17:09:57.100330] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2664267 ] 00:43:08.679 [2024-12-06 17:09:57.177106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:08.679 [2024-12-06 17:09:57.195294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:08.679 17:09:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:08.679 17:09:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:08.679 17:09:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:08.679 17:09:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:08.938 17:09:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5jDJYPLmTO 00:43:08.938 17:09:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5jDJYPLmTO 00:43:08.938 17:09:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:08.939 17:09:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:08.939 17:09:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:08.939 17:09:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:08.939 17:09:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.198 17:09:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.P98UB2FcZt == \/\t\m\p\/\t\m\p\.\P\9\8\U\B\2\F\c\Z\t ]] 00:43:09.198 17:09:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:09.198 17:09:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:09.198 17:09:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.198 17:09:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.198 17:09:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:09.456 17:09:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.5jDJYPLmTO == \/\t\m\p\/\t\m\p\.\5\j\D\J\Y\P\L\m\T\O ]] 00:43:09.456 17:09:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:09.456 17:09:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.456 17:09:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:09.456 17:09:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.456 17:09:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.456 17:09:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:09.456 17:09:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:09.456 17:09:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:09.456 17:09:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:09.456 17:09:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.456 17:09:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:43:09.456 17:09:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:09.456 17:09:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.719 17:09:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:09.719 17:09:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:09.719 17:09:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:09.719 [2024-12-06 17:09:58.387157] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:09.982 nvme0n1 00:43:09.982 17:09:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:09.982 17:09:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:09.982 17:09:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.982 17:09:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.240 17:09:58 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:10.240 17:09:58 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:10.240 Running I/O for 1 seconds... 
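Taken together, the bdevperf half of the test traced above is: start bdevperf idle with -z, feed it both key files and the TLS-protected controller over its RPC socket, then trigger the 1-second run with perform_tests. Every command below appears in the trace; paths are relative to the spdk checkout:

./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5jDJYPLmTO
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests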
00:43:11.617 21430.00 IOPS, 83.71 MiB/s 00:43:11.617 Latency(us) 00:43:11.617 [2024-12-06T16:10:00.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:11.617 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:11.617 nvme0n1 : 1.00 21480.98 83.91 0.00 0.00 5948.92 2075.31 16274.77 00:43:11.617 [2024-12-06T16:10:00.310Z] =================================================================================================================== 00:43:11.617 [2024-12-06T16:10:00.310Z] Total : 21480.98 83.91 0.00 0.00 5948.92 2075.31 16274.77 00:43:11.617 { 00:43:11.617 "results": [ 00:43:11.617 { 00:43:11.617 "job": "nvme0n1", 00:43:11.617 "core_mask": "0x2", 00:43:11.617 "workload": "randrw", 00:43:11.617 "percentage": 50, 00:43:11.617 "status": "finished", 00:43:11.617 "queue_depth": 128, 00:43:11.617 "io_size": 4096, 00:43:11.617 "runtime": 1.003632, 00:43:11.617 "iops": 21480.98107672932, 00:43:11.617 "mibps": 83.9100823309739, 00:43:11.617 "io_failed": 0, 00:43:11.617 "io_timeout": 0, 00:43:11.617 "avg_latency_us": 5948.9157852095805, 00:43:11.617 "min_latency_us": 2075.306666666667, 00:43:11.617 "max_latency_us": 16274.773333333333 00:43:11.617 } 00:43:11.617 ], 00:43:11.617 "core_count": 1 00:43:11.617 } 00:43:11.617 17:09:59 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:11.617 17:09:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:11.617 17:10:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.617 17:10:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:11.617 17:10:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.617 17:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.876 17:10:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:11.876 17:10:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 
00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:11.876 17:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:11.876 [2024-12-06 17:10:00.545032] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:11.876 [2024-12-06 17:10:00.545033] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:11.876 [2024-12-06 17:10:00.546023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd17910 (9): Bad file descriptor 00:43:11.876 [2024-12-06 17:10:00.547026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:11.876 [2024-12-06 17:10:00.547039] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:11.876 [2024-12-06 17:10:00.547045] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:11.876 [2024-12-06 17:10:00.547054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
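The failure just logged is the expected outcome: the attach uses key1, which presumably does not match the PSK the target was set up with, so the connection drops with ENOTCONN and the RPC returns Input/output error. The harness asserts this through the NOT wrapper from autotest_common.sh, which inverts the wrapped command's exit status; a sketch of its essence (the exact exit-status bookkeeping, visible as the es= lines in the trace, is elided):

NOT() {
    if "$@"; then
        return 1   # wrapped command was expected to fail but succeeded
    fi
    return 0       # expected failure observed: the negative test passes
}
NOT false   # returns 0
NOT true    # returns 1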
00:43:11.876 request: 00:43:11.876 { 00:43:11.876 "name": "nvme0", 00:43:11.876 "trtype": "tcp", 00:43:11.876 "traddr": "127.0.0.1", 00:43:11.876 "adrfam": "ipv4", 00:43:11.876 "trsvcid": "4420", 00:43:11.876 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:11.876 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:11.876 "prchk_reftag": false, 00:43:11.876 "prchk_guard": false, 00:43:11.876 "hdgst": false, 00:43:11.876 "ddgst": false, 00:43:11.876 "psk": "key1", 00:43:11.876 "allow_unrecognized_csi": false, 00:43:11.876 "method": "bdev_nvme_attach_controller", 00:43:11.876 "req_id": 1 00:43:11.876 } 00:43:11.876 Got JSON-RPC error response 00:43:11.876 response: 00:43:11.876 { 00:43:11.876 "code": -5, 00:43:11.876 "message": "Input/output error" 00:43:11.876 } 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:11.876 17:10:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:11.876 17:10:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:11.876 17:10:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:11.876 17:10:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:11.876 17:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:11.876 17:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.876 17:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.134 17:10:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:12.134 17:10:00 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:12.134 17:10:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:12.134 17:10:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:12.134 17:10:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:12.134 17:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.134 17:10:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:12.393 17:10:00 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:12.393 17:10:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:12.393 17:10:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:12.393 17:10:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:12.393 17:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:12.651 17:10:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:12.651 17:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.651 17:10:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:12.910 17:10:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:43:12.910 17:10:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.P98UB2FcZt 00:43:12.910 17:10:01 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:12.910 17:10:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:12.910 17:10:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:12.910 17:10:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:12.911 17:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:12.911 [2024-12-06 17:10:01.500488] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.P98UB2FcZt': 0100660 00:43:12.911 [2024-12-06 17:10:01.500507] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:12.911 request: 00:43:12.911 { 00:43:12.911 "name": "key0", 00:43:12.911 "path": "/tmp/tmp.P98UB2FcZt", 00:43:12.911 "method": "keyring_file_add_key", 00:43:12.911 "req_id": 1 00:43:12.911 } 00:43:12.911 Got JSON-RPC error response 00:43:12.911 response: 00:43:12.911 { 00:43:12.911 "code": -1, 00:43:12.911 "message": "Operation not permitted" 00:43:12.911 } 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:12.911 17:10:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:12.911 17:10:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.P98UB2FcZt 00:43:12.911 17:10:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:12.911 17:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.P98UB2FcZt 00:43:13.169 17:10:01 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.P98UB2FcZt 00:43:13.169 17:10:01 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:13.169 17:10:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:13.169 17:10:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:13.169 17:10:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:13.169 17:10:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:13.169 17:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.169 17:10:01 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:13.169 17:10:01 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.169 17:10:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:13.169 17:10:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.169 17:10:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:13.169 17:10:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:13.169 17:10:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:13.169 17:10:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:13.169 17:10:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.169 17:10:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.428 [2024-12-06 17:10:01.985724] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.P98UB2FcZt': No such file or directory 00:43:13.428 [2024-12-06 17:10:01.985740] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:13.428 [2024-12-06 17:10:01.985753] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:13.428 [2024-12-06 17:10:01.985759] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:13.428 [2024-12-06 17:10:01.985765] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:13.428 [2024-12-06 17:10:01.985770] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:13.428 request: 00:43:13.428 { 00:43:13.428 "name": "nvme0", 00:43:13.428 "trtype": "tcp", 00:43:13.428 "traddr": "127.0.0.1", 00:43:13.428 "adrfam": "ipv4", 00:43:13.428 "trsvcid": "4420", 00:43:13.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:13.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:13.428 "prchk_reftag": false, 00:43:13.428 "prchk_guard": false, 00:43:13.428 "hdgst": false, 00:43:13.428 "ddgst": false, 00:43:13.428 "psk": "key0", 00:43:13.428 "allow_unrecognized_csi": false, 00:43:13.428 "method": "bdev_nvme_attach_controller", 00:43:13.428 "req_id": 1 00:43:13.428 } 00:43:13.428 Got JSON-RPC error response 00:43:13.428 response: 00:43:13.428 { 00:43:13.428 "code": -19, 00:43:13.428 "message": "No such device" 00:43:13.428 } 00:43:13.428 17:10:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:13.428 17:10:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:13.428 17:10:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:13.428 17:10:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:13.428 17:10:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:13.428 17:10:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:13.687 17:10:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.H6RHt3LtkK 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:13.687 17:10:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:13.687 17:10:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:13.687 17:10:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:13.687 17:10:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:13.687 17:10:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:13.687 17:10:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.H6RHt3LtkK 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.H6RHt3LtkK 00:43:13.687 17:10:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.H6RHt3LtkK 00:43:13.687 17:10:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H6RHt3LtkK 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H6RHt3LtkK 00:43:13.687 17:10:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.687 17:10:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.945 nvme0n1 00:43:13.945 17:10:02 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:13.945 17:10:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:13.945 17:10:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:13.945 17:10:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:13.945 17:10:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:13.945 17:10:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:14.203 17:10:02 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:14.203 17:10:02 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:14.203 17:10:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:14.462 17:10:02 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:14.462 17:10:02 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:14.462 17:10:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:14.462 17:10:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:14.462 17:10:02 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.462 17:10:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:14.462 17:10:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:14.462 17:10:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:14.462 17:10:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:14.462 17:10:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:14.462 17:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.462 17:10:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:14.721 17:10:03 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:14.721 17:10:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:14.721 17:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:14.979 17:10:03 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:14.979 17:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.979 17:10:03 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:14.979 17:10:03 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:14.979 17:10:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H6RHt3LtkK 00:43:14.979 17:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H6RHt3LtkK 00:43:15.238 17:10:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5jDJYPLmTO 00:43:15.238 17:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5jDJYPLmTO 00:43:15.238 17:10:03 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:15.238 17:10:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:15.496 nvme0n1 00:43:15.497 17:10:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:15.497 17:10:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:15.757 17:10:04 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:15.757 "subsystems": [ 00:43:15.757 { 00:43:15.757 "subsystem": "keyring", 00:43:15.757 "config": [ 00:43:15.757 { 00:43:15.757 "method": "keyring_file_add_key", 00:43:15.757 "params": { 00:43:15.757 "name": "key0", 00:43:15.757 "path": "/tmp/tmp.H6RHt3LtkK" 00:43:15.757 } 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "method": "keyring_file_add_key", 00:43:15.757 "params": { 00:43:15.757 "name": "key1", 00:43:15.757 "path": "/tmp/tmp.5jDJYPLmTO" 00:43:15.757 } 00:43:15.757 } 00:43:15.757 ] 00:43:15.757 
}, 00:43:15.757 { 00:43:15.757 "subsystem": "iobuf", 00:43:15.757 "config": [ 00:43:15.757 { 00:43:15.757 "method": "iobuf_set_options", 00:43:15.757 "params": { 00:43:15.757 "small_pool_count": 8192, 00:43:15.757 "large_pool_count": 1024, 00:43:15.757 "small_bufsize": 8192, 00:43:15.757 "large_bufsize": 135168, 00:43:15.757 "enable_numa": false 00:43:15.757 } 00:43:15.757 } 00:43:15.757 ] 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "subsystem": "sock", 00:43:15.757 "config": [ 00:43:15.757 { 00:43:15.757 "method": "sock_set_default_impl", 00:43:15.757 "params": { 00:43:15.757 "impl_name": "posix" 00:43:15.757 } 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "method": "sock_impl_set_options", 00:43:15.757 "params": { 00:43:15.757 "impl_name": "ssl", 00:43:15.757 "recv_buf_size": 4096, 00:43:15.757 "send_buf_size": 4096, 00:43:15.757 "enable_recv_pipe": true, 00:43:15.757 "enable_quickack": false, 00:43:15.757 "enable_placement_id": 0, 00:43:15.757 "enable_zerocopy_send_server": true, 00:43:15.757 "enable_zerocopy_send_client": false, 00:43:15.757 "zerocopy_threshold": 0, 00:43:15.757 "tls_version": 0, 00:43:15.757 "enable_ktls": false 00:43:15.757 } 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "method": "sock_impl_set_options", 00:43:15.757 "params": { 00:43:15.757 "impl_name": "posix", 00:43:15.757 "recv_buf_size": 2097152, 00:43:15.757 "send_buf_size": 2097152, 00:43:15.757 "enable_recv_pipe": true, 00:43:15.757 "enable_quickack": false, 00:43:15.757 "enable_placement_id": 0, 00:43:15.757 "enable_zerocopy_send_server": true, 00:43:15.757 "enable_zerocopy_send_client": false, 00:43:15.757 "zerocopy_threshold": 0, 00:43:15.757 "tls_version": 0, 00:43:15.757 "enable_ktls": false 00:43:15.757 } 00:43:15.757 } 00:43:15.757 ] 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "subsystem": "vmd", 00:43:15.757 "config": [] 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "subsystem": "accel", 00:43:15.757 "config": [ 00:43:15.757 { 00:43:15.757 "method": "accel_set_options", 00:43:15.757 "params": { 00:43:15.757 "small_cache_size": 128, 00:43:15.757 "large_cache_size": 16, 00:43:15.757 "task_count": 2048, 00:43:15.757 "sequence_count": 2048, 00:43:15.757 "buf_count": 2048 00:43:15.757 } 00:43:15.757 } 00:43:15.757 ] 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "subsystem": "bdev", 00:43:15.757 "config": [ 00:43:15.757 { 00:43:15.757 "method": "bdev_set_options", 00:43:15.757 "params": { 00:43:15.757 "bdev_io_pool_size": 65535, 00:43:15.757 "bdev_io_cache_size": 256, 00:43:15.757 "bdev_auto_examine": true, 00:43:15.757 "iobuf_small_cache_size": 128, 00:43:15.757 "iobuf_large_cache_size": 16 00:43:15.757 } 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "method": "bdev_raid_set_options", 00:43:15.757 "params": { 00:43:15.757 "process_window_size_kb": 1024, 00:43:15.757 "process_max_bandwidth_mb_sec": 0 00:43:15.757 } 00:43:15.757 }, 00:43:15.757 { 00:43:15.757 "method": "bdev_iscsi_set_options", 00:43:15.758 "params": { 00:43:15.758 "timeout_sec": 30 00:43:15.758 } 00:43:15.758 }, 00:43:15.758 { 00:43:15.758 "method": "bdev_nvme_set_options", 00:43:15.758 "params": { 00:43:15.758 "action_on_timeout": "none", 00:43:15.758 "timeout_us": 0, 00:43:15.758 "timeout_admin_us": 0, 00:43:15.758 "keep_alive_timeout_ms": 10000, 00:43:15.758 "arbitration_burst": 0, 00:43:15.758 "low_priority_weight": 0, 00:43:15.758 "medium_priority_weight": 0, 00:43:15.758 "high_priority_weight": 0, 00:43:15.758 "nvme_adminq_poll_period_us": 10000, 00:43:15.758 "nvme_ioq_poll_period_us": 0, 00:43:15.758 "io_queue_requests": 512, 00:43:15.758 
"delay_cmd_submit": true, 00:43:15.758 "transport_retry_count": 4, 00:43:15.758 "bdev_retry_count": 3, 00:43:15.758 "transport_ack_timeout": 0, 00:43:15.758 "ctrlr_loss_timeout_sec": 0, 00:43:15.758 "reconnect_delay_sec": 0, 00:43:15.758 "fast_io_fail_timeout_sec": 0, 00:43:15.758 "disable_auto_failback": false, 00:43:15.758 "generate_uuids": false, 00:43:15.758 "transport_tos": 0, 00:43:15.758 "nvme_error_stat": false, 00:43:15.758 "rdma_srq_size": 0, 00:43:15.758 "io_path_stat": false, 00:43:15.758 "allow_accel_sequence": false, 00:43:15.758 "rdma_max_cq_size": 0, 00:43:15.758 "rdma_cm_event_timeout_ms": 0, 00:43:15.758 "dhchap_digests": [ 00:43:15.758 "sha256", 00:43:15.758 "sha384", 00:43:15.758 "sha512" 00:43:15.758 ], 00:43:15.758 "dhchap_dhgroups": [ 00:43:15.758 "null", 00:43:15.758 "ffdhe2048", 00:43:15.758 "ffdhe3072", 00:43:15.758 "ffdhe4096", 00:43:15.758 "ffdhe6144", 00:43:15.758 "ffdhe8192" 00:43:15.758 ] 00:43:15.758 } 00:43:15.758 }, 00:43:15.758 { 00:43:15.758 "method": "bdev_nvme_attach_controller", 00:43:15.758 "params": { 00:43:15.758 "name": "nvme0", 00:43:15.758 "trtype": "TCP", 00:43:15.758 "adrfam": "IPv4", 00:43:15.758 "traddr": "127.0.0.1", 00:43:15.758 "trsvcid": "4420", 00:43:15.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:15.758 "prchk_reftag": false, 00:43:15.758 "prchk_guard": false, 00:43:15.758 "ctrlr_loss_timeout_sec": 0, 00:43:15.758 "reconnect_delay_sec": 0, 00:43:15.758 "fast_io_fail_timeout_sec": 0, 00:43:15.758 "psk": "key0", 00:43:15.758 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:15.758 "hdgst": false, 00:43:15.758 "ddgst": false, 00:43:15.758 "multipath": "multipath" 00:43:15.758 } 00:43:15.758 }, 00:43:15.758 { 00:43:15.758 "method": "bdev_nvme_set_hotplug", 00:43:15.758 "params": { 00:43:15.758 "period_us": 100000, 00:43:15.758 "enable": false 00:43:15.758 } 00:43:15.758 }, 00:43:15.758 { 00:43:15.758 "method": "bdev_wait_for_examine" 00:43:15.758 } 00:43:15.758 ] 00:43:15.758 }, 00:43:15.758 { 00:43:15.758 "subsystem": "nbd", 00:43:15.758 "config": [] 00:43:15.758 } 00:43:15.758 ] 00:43:15.758 }' 00:43:15.758 17:10:04 keyring_file -- keyring/file.sh@115 -- # killprocess 2664267 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2664267 ']' 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2664267 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664267 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664267' 00:43:15.758 killing process with pid 2664267 00:43:15.758 17:10:04 keyring_file -- common/autotest_common.sh@973 -- # kill 2664267 00:43:15.758 Received shutdown signal, test time was about 1.000000 seconds 00:43:15.758 00:43:15.758 Latency(us) 00:43:15.758 [2024-12-06T16:10:04.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:15.758 [2024-12-06T16:10:04.451Z] =================================================================================================================== 00:43:15.758 [2024-12-06T16:10:04.451Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:15.758 17:10:04 
keyring_file -- common/autotest_common.sh@978 -- # wait 2664267 00:43:16.017 17:10:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=2665893 00:43:16.017 17:10:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2665893 /var/tmp/bperf.sock 00:43:16.017 17:10:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2665893 ']' 00:43:16.018 17:10:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:16.018 17:10:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:16.018 17:10:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:16.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:16.018 17:10:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:16.018 17:10:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:16.018 17:10:04 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:16.018 17:10:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:16.018 "subsystems": [ 00:43:16.018 { 00:43:16.018 "subsystem": "keyring", 00:43:16.018 "config": [ 00:43:16.018 { 00:43:16.018 "method": "keyring_file_add_key", 00:43:16.018 "params": { 00:43:16.018 "name": "key0", 00:43:16.018 "path": "/tmp/tmp.H6RHt3LtkK" 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "keyring_file_add_key", 00:43:16.018 "params": { 00:43:16.018 "name": "key1", 00:43:16.018 "path": "/tmp/tmp.5jDJYPLmTO" 00:43:16.018 } 00:43:16.018 } 00:43:16.018 ] 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "subsystem": "iobuf", 00:43:16.018 "config": [ 00:43:16.018 { 00:43:16.018 "method": "iobuf_set_options", 00:43:16.018 "params": { 00:43:16.018 "small_pool_count": 8192, 00:43:16.018 "large_pool_count": 1024, 00:43:16.018 "small_bufsize": 8192, 00:43:16.018 "large_bufsize": 135168, 00:43:16.018 "enable_numa": false 00:43:16.018 } 00:43:16.018 } 00:43:16.018 ] 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "subsystem": "sock", 00:43:16.018 "config": [ 00:43:16.018 { 00:43:16.018 "method": "sock_set_default_impl", 00:43:16.018 "params": { 00:43:16.018 "impl_name": "posix" 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "sock_impl_set_options", 00:43:16.018 "params": { 00:43:16.018 "impl_name": "ssl", 00:43:16.018 "recv_buf_size": 4096, 00:43:16.018 "send_buf_size": 4096, 00:43:16.018 "enable_recv_pipe": true, 00:43:16.018 "enable_quickack": false, 00:43:16.018 "enable_placement_id": 0, 00:43:16.018 "enable_zerocopy_send_server": true, 00:43:16.018 "enable_zerocopy_send_client": false, 00:43:16.018 "zerocopy_threshold": 0, 00:43:16.018 "tls_version": 0, 00:43:16.018 "enable_ktls": false 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "sock_impl_set_options", 00:43:16.018 "params": { 00:43:16.018 "impl_name": "posix", 00:43:16.018 "recv_buf_size": 2097152, 00:43:16.018 "send_buf_size": 2097152, 00:43:16.018 "enable_recv_pipe": true, 00:43:16.018 "enable_quickack": false, 00:43:16.018 "enable_placement_id": 0, 00:43:16.018 "enable_zerocopy_send_server": true, 00:43:16.018 "enable_zerocopy_send_client": false, 00:43:16.018 "zerocopy_threshold": 0, 00:43:16.018 "tls_version": 0, 00:43:16.018 "enable_ktls": false 00:43:16.018 } 00:43:16.018 } 00:43:16.018 ] 00:43:16.018 }, 
00:43:16.018 { 00:43:16.018 "subsystem": "vmd", 00:43:16.018 "config": [] 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "subsystem": "accel", 00:43:16.018 "config": [ 00:43:16.018 { 00:43:16.018 "method": "accel_set_options", 00:43:16.018 "params": { 00:43:16.018 "small_cache_size": 128, 00:43:16.018 "large_cache_size": 16, 00:43:16.018 "task_count": 2048, 00:43:16.018 "sequence_count": 2048, 00:43:16.018 "buf_count": 2048 00:43:16.018 } 00:43:16.018 } 00:43:16.018 ] 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "subsystem": "bdev", 00:43:16.018 "config": [ 00:43:16.018 { 00:43:16.018 "method": "bdev_set_options", 00:43:16.018 "params": { 00:43:16.018 "bdev_io_pool_size": 65535, 00:43:16.018 "bdev_io_cache_size": 256, 00:43:16.018 "bdev_auto_examine": true, 00:43:16.018 "iobuf_small_cache_size": 128, 00:43:16.018 "iobuf_large_cache_size": 16 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "bdev_raid_set_options", 00:43:16.018 "params": { 00:43:16.018 "process_window_size_kb": 1024, 00:43:16.018 "process_max_bandwidth_mb_sec": 0 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "bdev_iscsi_set_options", 00:43:16.018 "params": { 00:43:16.018 "timeout_sec": 30 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "bdev_nvme_set_options", 00:43:16.018 "params": { 00:43:16.018 "action_on_timeout": "none", 00:43:16.018 "timeout_us": 0, 00:43:16.018 "timeout_admin_us": 0, 00:43:16.018 "keep_alive_timeout_ms": 10000, 00:43:16.018 "arbitration_burst": 0, 00:43:16.018 "low_priority_weight": 0, 00:43:16.018 "medium_priority_weight": 0, 00:43:16.018 "high_priority_weight": 0, 00:43:16.018 "nvme_adminq_poll_period_us": 10000, 00:43:16.018 "nvme_ioq_poll_period_us": 0, 00:43:16.018 "io_queue_requests": 512, 00:43:16.018 "delay_cmd_submit": true, 00:43:16.018 "transport_retry_count": 4, 00:43:16.018 "bdev_retry_count": 3, 00:43:16.018 "transport_ack_timeout": 0, 00:43:16.018 "ctrlr_loss_timeout_sec": 0, 00:43:16.018 "reconnect_delay_sec": 0, 00:43:16.018 "fast_io_fail_timeout_sec": 0, 00:43:16.018 "disable_auto_failback": false, 00:43:16.018 "generate_uuids": false, 00:43:16.018 "transport_tos": 0, 00:43:16.018 "nvme_error_stat": false, 00:43:16.018 "rdma_srq_size": 0, 00:43:16.018 "io_path_stat": false, 00:43:16.018 "allow_accel_sequence": false, 00:43:16.018 "rdma_max_cq_size": 0, 00:43:16.018 "rdma_cm_event_timeout_ms": 0, 00:43:16.018 "dhchap_digests": [ 00:43:16.018 "sha256", 00:43:16.018 "sha384", 00:43:16.018 "sha512" 00:43:16.018 ], 00:43:16.018 "dhchap_dhgroups": [ 00:43:16.018 "null", 00:43:16.018 "ffdhe2048", 00:43:16.018 "ffdhe3072", 00:43:16.018 "ffdhe4096", 00:43:16.018 "ffdhe6144", 00:43:16.018 "ffdhe8192" 00:43:16.018 ] 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "bdev_nvme_attach_controller", 00:43:16.018 "params": { 00:43:16.018 "name": "nvme0", 00:43:16.018 "trtype": "TCP", 00:43:16.018 "adrfam": "IPv4", 00:43:16.018 "traddr": "127.0.0.1", 00:43:16.018 "trsvcid": "4420", 00:43:16.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:16.018 "prchk_reftag": false, 00:43:16.018 "prchk_guard": false, 00:43:16.018 "ctrlr_loss_timeout_sec": 0, 00:43:16.018 "reconnect_delay_sec": 0, 00:43:16.018 "fast_io_fail_timeout_sec": 0, 00:43:16.018 "psk": "key0", 00:43:16.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:16.018 "hdgst": false, 00:43:16.018 "ddgst": false, 00:43:16.018 "multipath": "multipath" 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "bdev_nvme_set_hotplug", 00:43:16.018 "params": { 
00:43:16.018 "period_us": 100000, 00:43:16.018 "enable": false 00:43:16.018 } 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "method": "bdev_wait_for_examine" 00:43:16.018 } 00:43:16.018 ] 00:43:16.018 }, 00:43:16.018 { 00:43:16.018 "subsystem": "nbd", 00:43:16.018 "config": [] 00:43:16.018 } 00:43:16.018 ] 00:43:16.018 }' 00:43:16.018 [2024-12-06 17:10:04.499555] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 00:43:16.018 [2024-12-06 17:10:04.499611] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665893 ] 00:43:16.018 [2024-12-06 17:10:04.562834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:16.018 [2024-12-06 17:10:04.578944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:16.339 [2024-12-06 17:10:04.717723] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:16.645 17:10:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:16.645 17:10:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:16.645 17:10:05 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:16.645 17:10:05 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:16.645 17:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.920 17:10:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:16.920 17:10:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.920 17:10:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:16.920 17:10:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.920 17:10:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:17.179 17:10:05 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:17.179 17:10:05 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:17.179 17:10:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:17.179 17:10:05 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:17.438 17:10:05 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:17.438 17:10:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:17.438 17:10:05 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.H6RHt3LtkK /tmp/tmp.5jDJYPLmTO 00:43:17.438 17:10:05 keyring_file -- keyring/file.sh@20 -- # killprocess 2665893 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2665893 ']' 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2665893 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2665893 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2665893' 00:43:17.438 killing process with pid 2665893 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@973 -- # kill 2665893 00:43:17.438 Received shutdown signal, test time was about 1.000000 seconds 00:43:17.438 00:43:17.438 Latency(us) 00:43:17.438 [2024-12-06T16:10:06.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:17.438 [2024-12-06T16:10:06.131Z] =================================================================================================================== 00:43:17.438 [2024-12-06T16:10:06.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:17.438 17:10:05 keyring_file -- common/autotest_common.sh@978 -- # wait 2665893 00:43:17.438 17:10:06 keyring_file -- keyring/file.sh@21 -- # killprocess 2664098 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2664098 ']' 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2664098 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2664098 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2664098' 00:43:17.438 killing process with pid 2664098 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@973 -- # kill 2664098 00:43:17.438 17:10:06 keyring_file -- common/autotest_common.sh@978 -- # wait 2664098 00:43:17.696 00:43:17.696 real 0m9.791s 00:43:17.696 user 0m24.110s 00:43:17.696 sys 0m2.170s 00:43:17.696 17:10:06 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:17.696 17:10:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:17.696 ************************************ 00:43:17.696 END TEST keyring_file 00:43:17.696 ************************************ 00:43:17.696 17:10:06 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:17.697 17:10:06 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:17.697 17:10:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:17.697 17:10:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:17.697 17:10:06 -- 
common/autotest_common.sh@10 -- # set +x 00:43:17.697 ************************************ 00:43:17.697 START TEST keyring_linux 00:43:17.697 ************************************ 00:43:17.697 17:10:06 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:17.697 Joined session keyring: 831677982 00:43:17.697 * Looking for test storage... 00:43:17.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:17.697 17:10:06 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:17.697 17:10:06 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:43:17.697 17:10:06 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:17.957 17:10:06 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:17.957 17:10:06 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:17.957 17:10:06 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:17.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:17.957 --rc genhtml_branch_coverage=1 00:43:17.957 --rc genhtml_function_coverage=1 00:43:17.957 --rc genhtml_legend=1 00:43:17.957 --rc geninfo_all_blocks=1 00:43:17.957 --rc geninfo_unexecuted_blocks=1 00:43:17.957 00:43:17.957 ' 00:43:17.957 17:10:06 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:17.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:17.957 --rc genhtml_branch_coverage=1 00:43:17.957 --rc genhtml_function_coverage=1 00:43:17.957 --rc genhtml_legend=1 00:43:17.957 --rc geninfo_all_blocks=1 00:43:17.957 --rc geninfo_unexecuted_blocks=1 00:43:17.957 00:43:17.957 ' 00:43:17.957 17:10:06 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:17.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:17.957 --rc genhtml_branch_coverage=1 00:43:17.957 --rc genhtml_function_coverage=1 00:43:17.957 --rc genhtml_legend=1 00:43:17.957 --rc geninfo_all_blocks=1 00:43:17.957 --rc geninfo_unexecuted_blocks=1 00:43:17.957 00:43:17.957 ' 00:43:17.957 17:10:06 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:17.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:17.957 --rc genhtml_branch_coverage=1 00:43:17.957 --rc genhtml_function_coverage=1 00:43:17.957 --rc genhtml_legend=1 00:43:17.957 --rc geninfo_all_blocks=1 00:43:17.957 --rc geninfo_unexecuted_blocks=1 00:43:17.957 00:43:17.957 ' 00:43:17.957 17:10:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:17.957 17:10:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:17.957 17:10:06 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:17.957 17:10:06 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:17.958 17:10:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:17.958 17:10:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:17.958 17:10:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:17.958 17:10:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:17.958 17:10:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
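(Aside: the lt/cmp_versions helper traced in the xtrace above boils down to a field-by-field numeric compare; a condensed standalone sketch of the same idea, assuming purely numeric version fields split on '.', '-' or ':')

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local i
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # first smaller field decides
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the check made above before picking LCOV_OPTS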
00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:17.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:17.958 /tmp/:spdk-test:key0 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:17.958 
17:10:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:17.958 17:10:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:17.958 17:10:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:17.958 /tmp/:spdk-test:key1 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2666429 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2666429 00:43:17.958 17:10:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2666429 ']' 00:43:17.958 17:10:06 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:17.958 17:10:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:17.958 17:10:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:17.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:17.958 17:10:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:17.958 17:10:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:17.958 17:10:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:17.958 [2024-12-06 17:10:06.560201] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
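(Aside: the interchange PSKs written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 above come out of format_interchange_psk via the inline `python -` seen in the xtrace; a minimal standalone sketch of that transform, assuming the hex-looking key string is used as literal ASCII bytes with a little-endian CRC-32 appended and digest 0 meaning no hash transform -- this matches the NVMeTLSkey-1:00:MDAx... value printed in this run)

    key=00112233445566778899aabbccddeeff
    python3 - "$key" <<'PYEOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()                    # key text as raw bytes, not hex-decoded
    crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC-32 trailer
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
    PYEOF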
00:43:17.958 [2024-12-06 17:10:06.560259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666429 ] 00:43:17.958 [2024-12-06 17:10:06.622858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:17.958 [2024-12-06 17:10:06.639565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:18.218 17:10:06 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:18.218 17:10:06 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:18.218 17:10:06 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:18.218 17:10:06 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:18.218 17:10:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:18.218 [2024-12-06 17:10:06.793976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:18.218 null0 00:43:18.218 [2024-12-06 17:10:06.826027] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:18.218 [2024-12-06 17:10:06.826390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:18.218 17:10:06 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:18.218 17:10:06 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:18.218 415721303 00:43:18.218 17:10:06 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:18.218 547754410 00:43:18.218 17:10:06 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2666629 00:43:18.218 17:10:06 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2666629 /var/tmp/bperf.sock 00:43:18.218 17:10:06 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2666629 ']' 00:43:18.218 17:10:06 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:18.219 17:10:06 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:18.219 17:10:06 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:18.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:18.219 17:10:06 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:18.219 17:10:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:18.219 17:10:06 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:18.219 [2024-12-06 17:10:06.883992] Starting SPDK v25.01-pre git sha1 a5e6ecf28 / DPDK 22.11.4 initialization... 
00:43:18.219 [2024-12-06 17:10:06.884040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2666629 ] 00:43:18.478 [2024-12-06 17:10:06.946336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:18.478 [2024-12-06 17:10:06.962744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:18.478 17:10:07 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:18.478 17:10:07 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:18.478 17:10:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:18.478 17:10:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:18.478 17:10:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:18.478 17:10:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:18.737 17:10:07 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:18.737 17:10:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:18.995 [2024-12-06 17:10:07.505116] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:18.995 nvme0n1 00:43:18.995 17:10:07 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:18.996 17:10:07 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:18.996 17:10:07 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:18.996 17:10:07 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:18.996 17:10:07 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:18.996 17:10:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:19.254 17:10:07 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:19.254 17:10:07 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:19.254 17:10:07 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:19.254 17:10:07 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:19.254 17:10:07 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:19.254 17:10:07 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:19.254 17:10:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:19.255 17:10:07 keyring_linux -- keyring/linux.sh@25 -- # sn=415721303 00:43:19.255 17:10:07 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:19.255 17:10:07 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:19.255 17:10:07 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 415721303 == \4\1\5\7\2\1\3\0\3 ]] 00:43:19.255 17:10:07 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 415721303 00:43:19.255 17:10:07 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:19.255 17:10:07 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:19.514 Running I/O for 1 seconds... 00:43:20.453 24120.00 IOPS, 94.22 MiB/s 00:43:20.453 Latency(us) 00:43:20.453 [2024-12-06T16:10:09.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.453 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:20.453 nvme0n1 : 1.01 24121.95 94.23 0.00 0.00 5290.46 4041.39 13052.59 00:43:20.453 [2024-12-06T16:10:09.146Z] =================================================================================================================== 00:43:20.453 [2024-12-06T16:10:09.146Z] Total : 24121.95 94.23 0.00 0.00 5290.46 4041.39 13052.59 00:43:20.453 { 00:43:20.453 "results": [ 00:43:20.453 { 00:43:20.453 "job": "nvme0n1", 00:43:20.453 "core_mask": "0x2", 00:43:20.453 "workload": "randread", 00:43:20.453 "status": "finished", 00:43:20.453 "queue_depth": 128, 00:43:20.453 "io_size": 4096, 00:43:20.453 "runtime": 1.005267, 00:43:20.453 "iops": 24121.94969097762, 00:43:20.453 "mibps": 94.22636598038133, 00:43:20.453 "io_failed": 0, 00:43:20.453 "io_timeout": 0, 00:43:20.453 "avg_latency_us": 5290.458057651862, 00:43:20.453 "min_latency_us": 4041.3866666666668, 00:43:20.453 "max_latency_us": 13052.586666666666 00:43:20.453 } 00:43:20.453 ], 00:43:20.453 "core_count": 1 00:43:20.453 } 00:43:20.453 17:10:09 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:20.453 17:10:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:20.713 17:10:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:20.713 17:10:09 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.713 17:10:09 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:20.713 17:10:09 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:43:20.713 17:10:09 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:20.713 17:10:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.713 17:10:09 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:20.713 17:10:09 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.713 17:10:09 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.713 17:10:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.974 [2024-12-06 17:10:09.494907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:20.974 [2024-12-06 17:10:09.495680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdd6a0 (107): Transport endpoint is not connected 00:43:20.974 [2024-12-06 17:10:09.496677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fdd6a0 (9): Bad file descriptor 00:43:20.974 [2024-12-06 17:10:09.497678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:20.974 [2024-12-06 17:10:09.497687] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:20.974 [2024-12-06 17:10:09.497693] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:20.974 [2024-12-06 17:10:09.497699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:20.974 request:
00:43:20.974 {
00:43:20.974 "name": "nvme0",
00:43:20.974 "trtype": "tcp",
00:43:20.974 "traddr": "127.0.0.1",
00:43:20.974 "adrfam": "ipv4",
00:43:20.974 "trsvcid": "4420",
00:43:20.974 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:43:20.974 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:43:20.974 "prchk_reftag": false,
00:43:20.974 "prchk_guard": false,
00:43:20.974 "hdgst": false,
00:43:20.974 "ddgst": false,
00:43:20.974 "psk": ":spdk-test:key1",
00:43:20.974 "allow_unrecognized_csi": false,
00:43:20.974 "method": "bdev_nvme_attach_controller",
00:43:20.974 "req_id": 1
00:43:20.974 }
00:43:20.974 Got JSON-RPC error response
00:43:20.974 response:
00:43:20.974 {
00:43:20.974 "code": -5,
00:43:20.974 "message": "Input/output error"
00:43:20.974 }
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@33 -- # sn=415721303
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 415721303
00:43:20.974 1 links removed
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@33 -- # sn=547754410
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 547754410
00:43:20.974 1 links removed
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2666629
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2666629 ']'
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2666629
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2666629
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2666629'
00:43:20.974 killing process with pid 2666629
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 2666629
00:43:20.974 Received shutdown signal, test time was about 1.000000 seconds
00:43:20.974
00:43:20.974 Latency(us)
00:43:20.974 [2024-12-06T16:10:09.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:20.974 [2024-12-06T16:10:09.667Z] ===================================================================================================================
00:43:20.974 [2024-12-06T16:10:09.667Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 2666629
00:43:20.974 17:10:09 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2666429
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2666429 ']'
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2666429
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:20.974 17:10:09 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2666429
00:43:21.235 17:10:09 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:43:21.235 17:10:09 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:43:21.235 17:10:09 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2666429'
00:43:21.235 killing process with pid 2666429
00:43:21.235 17:10:09 keyring_linux -- common/autotest_common.sh@973 -- # kill 2666429
00:43:21.235 17:10:09 keyring_linux -- common/autotest_common.sh@978 -- # wait 2666429
00:43:21.235
00:43:21.235 real 0m3.553s
00:43:21.235 user 0m6.777s
00:43:21.235 sys 0m1.150s
00:43:21.235 17:10:09 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:43:21.235 17:10:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:43:21.235 ************************************
00:43:21.235 END TEST keyring_linux
00:43:21.235 ************************************
00:43:21.235 17:10:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:43:21.235 17:10:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:43:21.235 17:10:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:43:21.235 17:10:09 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:43:21.235 17:10:09 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:43:21.235 17:10:09 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:43:21.235 17:10:09 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:43:21.235 17:10:09 -- common/autotest_common.sh@726 -- # xtrace_disable
00:43:21.235 17:10:09 -- common/autotest_common.sh@10 -- # set +x
00:43:21.235 17:10:09 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:43:21.235 17:10:09 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:43:21.235 17:10:09 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:43:21.235 17:10:09 -- common/autotest_common.sh@10 -- # set +x
00:43:26.522 INFO: APP EXITING
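# Note: a minimal equivalent (sketch, not part of the captured run) of the keyring
# cleanup performed above -- keyring/linux.sh's get_keysn/unlink_key pair resolves
# each key's serial number in the session keyring and unlinks it, which is what
# produced the "1 links removed" lines:
for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name") || continue   # serial number, e.g. 415721303 above
    keyctl unlink "$sn"                               # keyctl reports "1 links removed" per key
done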
00:43:26.522 INFO: killing all VMs
00:43:26.522 INFO: killing vhost app
00:43:26.522 INFO: EXIT DONE
00:43:29.068 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:65:00.0 (144d a80a): Already using the nvme driver
00:43:29.068 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:43:29.068 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:43:31.612 Cleaning
00:43:31.612 Removing: /var/run/dpdk/spdk0/config
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:43:31.612 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:43:31.612 Removing: /var/run/dpdk/spdk0/hugepage_info
00:43:31.612 Removing: /var/run/dpdk/spdk1/config
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:43:31.612 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:43:31.612 Removing: /var/run/dpdk/spdk1/hugepage_info
00:43:31.612 Removing: /var/run/dpdk/spdk2/config
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:43:31.612 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:43:31.612 Removing: /var/run/dpdk/spdk2/hugepage_info
00:43:31.612 Removing: /var/run/dpdk/spdk3/config
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:43:31.612 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:43:31.612 Removing: /var/run/dpdk/spdk3/hugepage_info
00:43:31.612 Removing: /var/run/dpdk/spdk4/config
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:43:31.612 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:43:31.613 Removing: /var/run/dpdk/spdk4/hugepage_info
00:43:31.613 Removing: /dev/shm/bdev_svc_trace.1
00:43:31.613 Removing: /dev/shm/nvmf_trace.0
00:43:31.613 Removing: /dev/shm/spdk_tgt_trace.pid1961641
00:43:31.613 Removing: /var/run/dpdk/spdk0
00:43:31.613 Removing: /var/run/dpdk/spdk1
00:43:31.613 Removing: /var/run/dpdk/spdk2
00:43:31.613 Removing: /var/run/dpdk/spdk3
00:43:31.613 Removing: /var/run/dpdk/spdk4
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1959877
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1961641
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1962203
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1963440
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1963575
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1964866
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1964960
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1965102
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1966228
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1967017
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1967465
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1967587
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1967991
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1968124
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1968432
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1968788
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1969170
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1969807
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1973591
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1973807
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1973984
00:43:31.613 Removing: /var/run/dpdk/spdk_pid1973993
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1974362
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1974486
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1975005
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1975068
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1975284
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1975432
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1975478
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1975642
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1976244
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1976378
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1976677
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1981331
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1986725
00:43:31.871 Removing: /var/run/dpdk/spdk_pid1999668
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2000359
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2005745
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2006103
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2011489
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2018589
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2021990
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2035400
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2046804
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2049137
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2050468
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2071803
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2076617
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2183760
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2190641
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2198055
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2206395
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2206397
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2207418
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2208720
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2209735
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2210467
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2210643
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2210905
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2211067
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2211071
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2212175
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2213396
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2214424
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2215245
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2215398
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2215727
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2216837
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2218100
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2228538
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2262976
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2268574
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2270893
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2273217
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2273244
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2273407
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2273592
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2273981
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2276364
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2277378
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2277757
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2280490
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2281151
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2281858
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2286934
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2294064
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2294065
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2294066
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2299276
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2304214
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2310555
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2358476
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2363779
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2371145
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2372633
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2374256
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2375960
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2381665
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2387131
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2392043
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2401340
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2401497
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2406719
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2407010
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2407339
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2407915
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2408005
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2409460
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2412099
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2414215
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2416526
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2418536
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2420845
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2428571
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2429393
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2430591
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2431848
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2438271
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2441735
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2448308
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2455033
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2465940
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2474861
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2474910
00:43:31.871 Removing: /var/run/dpdk/spdk_pid2497628
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2498302
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2498980
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2499654
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2500376
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2501060
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2501734
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2502409
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2507453
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2507784
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2515469
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2515719
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2522634
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2528567
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2540886
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2541857
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2546915
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2547268
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2552309
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2559491
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2562733
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2575201
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2586230
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2588866
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2590085
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2610340
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2615081
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2618568
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2626283
00:43:32.130 Removing: /var/run/dpdk/spdk_pid2626326
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2632333
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2634845
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2637355
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2638855
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2641394
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2642917
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2653924
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2654591
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2655258
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2658304
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2658972
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2659640
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2664098
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2664267
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2665893
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2666429
00:43:32.131 Removing: /var/run/dpdk/spdk_pid2666629
00:43:32.131 Clean
00:43:32.131 17:10:20 -- common/autotest_common.sh@1453 -- # return 0
00:43:32.131 17:10:20 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:43:32.131 17:10:20 -- common/autotest_common.sh@732 -- # xtrace_disable
00:43:32.131 17:10:20 -- common/autotest_common.sh@10 -- # set +x
00:43:32.131 17:10:20 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:43:32.131 17:10:20 -- common/autotest_common.sh@732 -- # xtrace_disable
00:43:32.131 17:10:20 -- common/autotest_common.sh@10 -- # set +x
00:43:32.131 17:10:20 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:32.131 17:10:20 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:43:32.131 17:10:20 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:43:32.131 17:10:20 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:43:32.131 17:10:20 -- spdk/autotest.sh@398 -- # hostname
00:43:32.131 17:10:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:43:32.391 geninfo: WARNING: invalid characters removed from testname!
00:43:50.497 17:10:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:53.031 17:10:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:54.410 17:10:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:56.319 17:10:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:57.696 17:10:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:43:59.601 17:10:48 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:01.513 17:10:49 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:44:01.513 17:10:49 -- spdk/autorun.sh@1 -- $ timing_finish
00:44:01.513 17:10:49 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:44:01.513 17:10:49 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:44:01.513 17:10:49 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:44:01.513 17:10:49 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:01.513 + [[ -n 1865023 ]]
00:44:01.513 + sudo kill 1865023
00:44:01.524 [Pipeline] }
00:44:01.544 [Pipeline] // stage
00:44:01.550 [Pipeline] }
00:44:01.568 [Pipeline] // timeout
00:44:01.573 [Pipeline] }
00:44:01.590 [Pipeline] // catchError
00:44:01.595 [Pipeline] }
00:44:01.612 [Pipeline] // wrap
00:44:01.618 [Pipeline] }
00:44:01.633 [Pipeline] // catchError
00:44:01.644 [Pipeline] stage
00:44:01.647 [Pipeline] { (Epilogue)
00:44:01.663 [Pipeline] catchError
00:44:01.665 [Pipeline] {
00:44:01.681 [Pipeline] echo
00:44:01.683 Cleanup processes
00:44:01.690 [Pipeline] sh
00:44:01.979 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:01.979 2679273 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:01.996 [Pipeline] sh
00:44:02.278 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:02.278 ++ grep -v 'sudo pgrep'
00:44:02.278 ++ awk '{print $1}'
00:44:02.278 + sudo kill -9
00:44:02.278 + true
00:44:02.292 [Pipeline] sh
00:44:02.579 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:14.806 [Pipeline] sh
00:44:15.090 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:15.090 Artifacts sizes are good
00:44:15.105 [Pipeline] archiveArtifacts
00:44:15.114 Archiving artifacts
00:44:15.312 [Pipeline] sh
00:44:15.678 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:44:15.694 [Pipeline] cleanWs
00:44:15.705 [WS-CLEANUP] Deleting project workspace...
00:44:15.705 [WS-CLEANUP] Deferred wipeout is used...
00:44:15.712 [WS-CLEANUP] done
00:44:15.714 [Pipeline] }
00:44:15.732 [Pipeline] // catchError
00:44:15.745 [Pipeline] sh
00:44:16.029 + logger -p user.info -t JENKINS-CI
00:44:16.038 [Pipeline] }
00:44:16.051 [Pipeline] // stage
00:44:16.056 [Pipeline] }
00:44:16.070 [Pipeline] // node
00:44:16.076 [Pipeline] End of Pipeline
00:44:16.116 Finished: SUCCESS